I have grown rather fed up with the reviewing process at conferences and journals. I do not think everything is done badly at all conferences and journals, but there are some severe shortcomings in this process.

I fully accept that my papers may be rejected, but in that case I expect good arguments against them. I have recently suffered two bad experiences that I would like to publicize here, because I am really fed up with this.

So, on this page, I provide the paper as it was submitted, the reviews we got, and my comments. This conference does not offer any way to react, as is now done in some major conferences such as ICML, NIPS, IJCAI, ECML, ... and even in much more modest ones.

Since I was not offered a right of reply, I put it on the web and let Google advertise it. Feel free to get in touch with me (philippe -dot- preux -at- inria -dot- fr).

Below you'll find some thoughts about the reviewing process at:

ICANN 2009

We had great reviews from ICANN. The first reviewer seems to have read the paper while sleeping. Below, I copy/paste the reviewing reports we received, and I comment on his/her remarks in red, pointing out that he/she is wrong and that the information is explicitly in the paper, or should be known by anyone who agrees to review it (very basic machine-learning knowledge). I was told that this reviewer is a senior AE of the IEEE Transactions on Neural Networks and a senior AE of the IEEE Transactions on Instrumentation and Measurement, so I am necessarily wrong. The composition of these committees will probably evolve in the future, so I record it as of the time I received these reviews (May 2009). For completeness, I also give the two other reviews we got for this paper at ICANN 2009.

======= Review 1 =======

> > *** Originality: How would you rate the originality of the paper?
Reject (0)

> > *** Significance of Topic: Is this topic significant to ICANN '09?
Neutral (2)

> > *** Technical Quality: How would you rate the technical quality of this paper?
Reject (0)

> > *** Presentation: How would you rate the presentation of this paper?
Reject (0)

> > *** Overall Rating: Do you recommend acceptance or rejection?
Reject (0)

> > *** Guidance for Authors: Please describe in detail main paper contributions, positive 
aspects, observed deficiencies, and suggestions on how to improve them.

This paper proposes an algorithm to automatically handle the selection of the optimal 
parametrization of the hidden units of neural networks. The motivation of the work, the use 
of multilayer neural network together with one hidden layer perceptrons is not clear. For 
example, in the simulations, the authors refer to multilayer network, but all the proposed 
algorithm seems to be based on one hidden layer perceptrons.
At lines 2-3 of our submission, we read: ``we want to investigate algorithms to build 1 hidden layer perceptrons (1HLP)''
And the very last sentence of the paper reads: ``More fundamentally, we also foresee the possibility to develop the same kind of approach to synthesize multiple hidden layer perceptrons.''
More specifically, the proposed algorithm (named ECON) should be an improvement of LARS, an 
existent algorithm. Why ECON is based on LARS and not on the other methods cited in Section 
1? My doubt is related to the fact that the LARS algorithm presented in Section 2 is confused 
and no suitable references are provided. The improvement of ECON with respect to LARS should 
be investigated more properly. No comparison between the two methods is provided.
The introduction (Section 1) discusses the LARS algorithm, stresses what we consider a weakness of LARS (one has to tune the hyper-parameters), and explains why we chose to base our work on LARS.
Section 2 summarizes the LARS algorithm, and explicitly says: ``we refer the interested reader to the seminal paper [3]''. [3] is already cited in the introduction.
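For readers who, unlike this reviewer, are willing to look LARS up: here is a tiny sketch of mine (not from the paper), using scikit-learn's `Lars` implementation. LARS builds its solution path one predictor at a time, entering at each step the variable most correlated with the current residual:

```python
# Toy example: only features 0 and 3 actually drive the target,
# so a 2-step LARS path should bring exactly those two in.
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.01 * rng.randn(100)

model = Lars(n_nonzero_coefs=2).fit(X, y)
print(sorted(model.active_))  # indices of the variables entered along the path
```

The data and parameter choices here are illustrative only; see the seminal LARS paper [3] for the real thing.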
Other comments:
- The problem of the first three lines of the Introduction is a classification problem, but
the authors refer to it as a regression problem! 
OK, at this point, we discover that this expert does not know what a regression problem is. I let the interested reader check for him/herself the first 3 lines of our submission.
All the simulation examples are classification problems, so it is not clear is the algorithm
is good for regression.
The experiment section is on pages 7-9 (Section 4).
Subsection 4.1 is pedagogical: we use the two-spiral-arms problem and exhibit how ECON deals with it. OK, that one is classification. Even though SVMs excel at classification, we show that ECON obtains pretty good results, and more is discussed in that section.
Subsection 4.2 is the serious matter; it is entitled ``Experiments on regression problems''! We use the Boston housing, Abalone, and house-price-32-8l datasets, which are all well known as regression datasets.
Table 1 shows that our algorithm clearly outperforms SVMs and two other algorithms.
- The use of the l_1 or l_2 regularization term in page 2 is well known and usually adopted
in the literature. No new contribution on this point are provided by this paper, so the use
of l_1 regularization in the title is excessive.
LARS is based on l1 regularization; our contributed algorithm, ECON, is based on LARS, hence on l1 regularization. By putting l1 in the title, we wanted to emphasize that this algorithm yields a sparse solution through a sound approach. For us, this is important, so we put it in the title.
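Since the point seems to need spelling out: l1 regularization produces *exactly* zero coefficients, which is the whole reason to advertise it. Here is a self-contained toy sketch of mine (hypothetical code, not ECON itself) of coordinate descent for l1-penalized least squares, where soft-thresholding kills the coefficients of irrelevant features:

```python
# Coordinate descent for (1/2n)||y - Xw||^2 + alpha * ||w||_1.
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            # Residual with feature j's current contribution removed.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            # Soft-thresholding: the coefficient is exactly 0 when |rho| <= alpha.
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return w

rng = np.random.RandomState(1)
X = rng.randn(200, 10)
y = X[:, 0] - 2.0 * X[:, 1] + 0.05 * rng.randn(200)  # only 2 of 10 features matter
w = lasso_cd(X, y, alpha=0.1)
print(np.count_nonzero(np.abs(w) > 1e-8))
```

An l2 (ridge) penalty on the same data would merely shrink all ten coefficients without zeroing any of them, which is exactly why the distinction belongs in a title.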
- LARS and ECON are acronyms I think. However, they are arconyms of
  what?
Google LARS and see! OK, we might have expanded it, sorry, our fault. ECON does mean something, but we found no easy-to-understand way to explain it, so we did not expand it.
- Figure 1 is unreadable and should be enlarged.
I probably have state-of-the-art equipment, because on my laptop I see it very clearly, and when I print it on a color printer it is very readable. Furthermore, I would love to provide page-wide figures, but we are limited to 10 pages.
- Why does in Figure 1(c) one error increase when the ECON iterations increase? What about
convergence of the method?
I can't say anything other than: come and please attend one of my ML classes, or any basic tutorial on ML. (Hint: the training error decreases as the model grows, but the test error can increase again once the model starts overfitting.) If needed, please feel free to email me. Or read Tom Mitchell's 1997 introduction to ML, or ....
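For anyone who has genuinely never seen this, the kind of behaviour the reviewer asks about in Figure 1(c) is textbook overfitting. A toy illustration of my own (polynomial fitting, nothing to do with ECON): as capacity grows, the training error keeps shrinking while the test error does not follow.

```python
# Fit polynomials of increasing degree to a noisy sine and compare
# training vs. test mean squared error.
import numpy as np

rng = np.random.RandomState(2)
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2 * np.pi * x_train) + 0.3 * rng.randn(20)
x_test = np.linspace(0.0, 1.0, 200)
y_test = np.sin(2 * np.pi * x_test) + 0.3 * rng.randn(200)

def errors(degree):
    """Train/test MSE of a least-squares polynomial fit of the given degree."""
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return mse(x_train, y_train), mse(x_test, y_test)

errs = {deg: errors(deg) for deg in (1, 3, 15)}
for deg, (tr, te) in errs.items():
    print(f"degree {deg:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

The degree-15 fit nails the 20 training points and falls apart on fresh data; there is no contradiction with convergence whatsoever.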
- How data in Table 1 have been obtained? What is the number at the
  denominator? Please explain!
Dear reviewer: have you read the caption of this table? And Subsection 4.2??? Please, read the paper you are supposed to review!
To sum up, the paper is not suitable to be presented at the ICANN 2009
      conference.
For fun: to sum up, use the clues provided before the review to identify this expert.
======= Review 2 =======

> > *** Originality: How would you rate the originality of the paper?
Weak Accept (3)

> > *** Significance of Topic: Is this topic significant to ICANN '09?
Weak Accept (3)

> > *** Technical Quality: How would you rate the technical quality of this paper?
Weak Accept (3)

> > *** Presentation: How would you rate the presentation of this paper?
Weak Accept (3)

> > *** Overall Rating: Do you recommend acceptance or rejection?
Weak Accept (3)

> > *** Guidance for Authors: Please describe in detail main paper contributions, positive
aspects, observed deficiencies, and suggestions on how to improve them.

The authors should investigate also the theoretical aspect of the
proposed approach and verify the effectiveness of their results via
simulations. In this form, the paper appears more based on "rule of
thumbs."

Thanks a lot, dear reviewer 2! We might discuss the rule-of-thumb-ness of ECON, but OK, thanks: there is nothing ridiculous in your review.
======= Review 3 =======

> > *** Originality: How would you rate the originality of the paper?
Reject (0)

> > *** Significance of Topic: Is this topic significant to ICANN '09?
Neutral (2)

> > *** Technical Quality: How would you rate the technical quality of this paper?
Weak Reject (1)

> > *** Presentation: How would you rate the presentation of this paper?
Weak Reject (1)

> > *** Overall Rating: Do you recommend acceptance or rejection?
Reject (0)

> > *** Guidance for Authors: Please describe in detail main paper contributions, positive
aspects, observed deficiencies, and suggestions on how to improve them.

The paper presents a method (ECON) which is a LARS-based algorithm, used in the field of
regression.
The presentation is clear and the topic is of value for the conference at hand. However, I
think the article is too weak in terms of originality of the theoretical and experimental
results for justifing publication.
This reviewer thinks the paper is too weak in terms of originality, but he/she is unable to say more. That is a very easy way to reject a paper.

Frontiers in Computer Science

We got this one in Sep. 2011. We had a paper at the IEEE International Conference on Data Mining 2010, and were later invited to submit an extended version to this journal. That is what we did, and we got these two reviews.

So, we got two reviews. The associate editor concluded that major revisions were needed.

The first reviewer basically asked us to reorganize the paper and make the notation less cumbersome. He or she also wrote that we were not citing some papers, while omitting to mention which papers we should have cited; we cited 21 papers and are quite confident that we did cite the papers related to our work.
(I'll put the whole review on-line soon.)

The second review is the following:

This paper is a nicely extended version of the authors' IEEE ICDM '10 paper.

Given that the KAIS journal (http://www.cs.uvm.edu/~kais/) has been publishing the best papers from IEEE ICDM every year, you are advised to try and cite 3 to 5 most recent, relevant papers from this journal in your revised manuscript (where *most recent* means papers published in 2010 and 2011, see http://www.springerlink.com/content/105441/ including the newly published papers under the Online First link), if at all possible. (Each Online First paper should cite its DOI with its online publication year as its publication year.) Citations from other relevant journals and conferences will also be helpful. Please respond in your revision statement with a list of newly added references.
After checking the papers published in KAIS over the last two years, we found only one dealing with computational advertising. That paper addresses a totally different perspective from ours, and there is no reason to cite it.