
GSoC 2019 + 2020

During the summers of 2019 and 2020, two students, Salsabila Mahdi from Indonesia and Akshaj Verma from India, enrolled via the Google Summer of Code program and worked under the guidance of three mentors, Patrice Kiener (InModelia), Christophe Dutang (University of Paris-Dauphine) and John C. Nash (retired professor, University of Ottawa), to evaluate R packages that claim to provide neural networks for regression purposes (the classical multilayer perceptron). Our goal was to verify these claims, especially the quality of the convergence and the ease of use, and to make recommendations about which packages are good and which are bad.

During GSoC 2019, 49 packages were evaluated with a first version of the evaluation code. Significant results were obtained, with a first appraisal of the good and the bad packages, but we ran out of time to write the article.

During GSoC 2020, the work was extended to 60 packages with a new and more flexible code, and an article was prepared. The article has been submitted to the R Journal and is currently under review. The submitted version and the supplementary materials can be read here:

Other resources:

  • The NNbenchmark package used in the 2019 and 2020 templates (installation sketch below).
  • The NNbenchmark templates used to evaluate the 60 packages: one R or Rmd template and one csv file of results per package.
  • The results presented as Notebooks for both the 2019 and 2020 evaluations.
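
For readers who wish to rerun the evaluations, a minimal sketch, assuming NNbenchmark is still available on CRAN (otherwise the GitHub repository mentioned above is the fallback):

    ## Install and load NNbenchmark, then list its exported objects
    ## (datasets and helper functions used by the templates).
    install.packages("NNbenchmark")
    library(NNbenchmark)
    ls("package:NNbenchmark")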


GSoC 2019

We reproduce here our call for two students to participate in Google Summer of Code 2019 under our guidance. The original text is published on a GitHub page under the R Project umbrella. Students can apply from February 26, 2019 to April 9, 2019, 18:00 UTC, as per the GSoC 2019 timeline.


Background

Most R packages that implement neural networks of perceptron type (one input layer, one normalized layer, one hidden layer with a nonlinear activation function, usually tanh(), one normalized layer, one output layer) for regression purposes (i.e. NN(X1, ..., Xn) = E[Y], as opposed to classification) use very poor learning algorithms and never find the global minimum of the objective function in the parameter space. Most of the time, a first order algorithm is used, whereas neural networks, like any nonlinear model, require a second order algorithm.
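
As a concrete illustration of this model class, the sketch below fits a one-hidden-layer tanh perceptron for regression with the nnet package, whose weights are optimized by BFGS (a quasi-Newton method) according to its documentation; the dataset and hyperparameters are toy choices, not those of the benchmark.

    ## Sketch: one-hidden-layer perceptron for regression with nnet.
    ## The data and settings are illustrative, not the benchmark's own.
    library(nnet)
    set.seed(42)
    x   <- runif(200, -3, 3)
    y   <- sin(x) + rnorm(200, sd = 0.1)     # simple nonlinear target
    dat <- data.frame(x = x, y = y)

    ## linout = TRUE requests a linear output unit (regression, not
    ## classification); size is the number of hidden tanh units.
    fit  <- nnet(y ~ x, data = dat, size = 5, linout = TRUE,
                 decay = 1e-4, maxit = 500, trace = FALSE)
    pred <- predict(fit, dat)
    sqrt(mean((dat$y - pred)^2))             # residual RMSE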

In 2015, Patrice Kiener conducted a private benchmark on more than twenty R packages with a few known datasets. The result was a disaster. More than 18 packages did not converge correctly and only 2 packages found the right values. We feel that an updated and more formal evaluation should be realized and communicated to the whole R community.

In March 2019, 69 R packages were published on CRAN with the “neural network” keyword either in the package title or in the package description: AMORE, ANN2, appnn, autoencoder, automl, BNN, brnn, Buddle, CaDENCE, cld2, cld3, condmixt, DALEX2, DamiaNN, deepnet, deepNN, DNMF, elmNNRcpp, ELMR, EnsembleBase, evclass, gamlss.add, gcForest, GMDH, GMDH2, GMDHreg, grnn, h2o, hybridEnsemble, isingLenzMC, keras, kerasformula, kerasR, leabRa, learNN, LilRhino, monmlp, neural, neuralnet, NeuralNetTools, NlinTS, nnet, nnetpredint, nnfor, onnx, OptimClassifier, OSTSC, pnn, predictoR, qrnn, QuantumOps, quarrint, radiant.model, rasclass, rcane, rminer, rnn, RSNNS, ruta, simpleNeural, snnR, softmaxreg, spnn, TeachNet, tensorflow, tfestimators, trackdem, TrafficBDE, tsensembler, validann.
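
A list of this kind can be regenerated at any date by grepping the CRAN package database; a hedged sketch (the exact query behind the original list may have differed):

    ## Sketch: CRAN packages mentioning "neural network" in the title
    ## or description. Requires an internet connection.
    db   <- tools::CRAN_package_db()
    hits <- grepl("neural network", paste(db$Title, db$Description),
                  ignore.case = TRUE)
    sort(unique(db$Package[hits]))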

We wish all these packages to be analyzed and, given this large number, invite two students to work on this new benchmark under our guidance and to publish their results in a task view, a website and the R Journal.


Related work

Such a benchmark has never been conducted on R packages so far. This work is acknowledged by the AIM SIG, which received some funding from the R Consortium (see the PSI application for collaboration to create an online R package validation repository). Some connections can also be established with the histoRicalg project.


Details of your coding project

We expect two students with a sound knowledge of nonlinear regression algorithms (BFGS, Levenberg-Marquardt). The purpose of this work is to (1) benchmark the 69 packages (34 or 35 packages per student) with 3 to 5 simple datasets; (2) write a comprehensive report on the performance of each package; (3) publish the report in various formats: task view, website, article in the R Journal.

(1) A simple piece of code to call each package, test it against the datasets and collect the results is to be written. This could grow into a meta-package that eases the benchmark procedure and supports other benchmarks in the future.
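
A hedged sketch of what such a harness could look like; the wrapper, dataset layout and column names below are placeholders, not the benchmark's actual interface:

    ## Sketch of a benchmark harness; fit_fun is a package-specific
    ## wrapper supplied by the caller. Failures are recorded as NA.
    run_one <- function(pkg, fit_fun, dataset) {
      t0   <- Sys.time()
      rmse <- tryCatch({
        fit  <- fit_fun(dataset$train)             # train the model
        pred <- predict(fit, dataset$test)
        sqrt(mean((dataset$test$y - pred)^2))      # held-out RMSE
      }, error = function(e) NA_real_)
      data.frame(package = pkg, rmse = rmse,
                 seconds = as.numeric(difftime(Sys.time(), t0,
                                               units = "secs")))
    }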

(2) The biggest effort will be writing up the results in a polished report, if possible directly in the R Journal format (easily produced with the “rticles” package). An introduction to both neural networks and optimization methods is expected.
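
For instance, an R Journal skeleton can be drafted as follows (the template name "rjournal" follows the rticles documentation of that period; verify it against the installed version):

    ## Create an R Journal article skeleton from the rticles template.
    ## install.packages(c("rmarkdown", "rticles")) beforehand if needed.
    rmarkdown::draft("nn-benchmark.Rmd", template = "rjournal",
                     package = "rticles", edit = FALSE)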

(3) Extend the report to other communication formats: task view, website.

  • Student 1 will work on 35 packages from letter A to L, i.e. from package AMORE to package LilRhino, publish the results on a website (Rpubs or similar) and write the task view.
  • Student 2 will work on 34 packages from letter Z to M, i.e. from package validann to package monmlp, and write the article to be published in the R Journal.

The students will work independently, i.e. develop their own code to evaluate each package and prepare the task view, the website and the R Journal article, but will publish the results in a coordinated way. We expect a table listing the 69 packages, with cells filled with marks such as (+/-) (good/not good) or 1 to 5 stars on several aspects: algorithm name, algorithm convergence, external/internal normalization, availability of a predict function, type of input (data.frame, matrix, vector, formula), ease of use, score, comments.
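
As an illustration, one row of such a table could be collected in a data frame shaped as below; the column names and marks are our suggestion, not a fixed specification:

    ## Illustrative shape of the expected summary table (one row per
    ## package); the values shown are examples, not benchmark results.
    summary_row <- data.frame(
      package       = "nnet",
      algorithm     = "BFGS",
      convergence   = "4/5",
      normalization = "external",
      has_predict   = TRUE,
      input_types   = "formula, matrix",
      ease_of_use   = "5/5",
      comments      = "example entry only"
    )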


Expected impact

With this work, we wish to alert R users about the varying performance of neural network packages. Users, both from academia and private companies, should be aware of the strengths and the weaknesses of the packages they use. We expect a bigger impact on package maintainers and authors and hope that such a benchmark will convince them to shift to better algorithms.

Neural networks are often regarded as black boxes, especially nowadays with the advent of machine learning and artificial intelligence procedures, but a minimum of care and a sound mathematical approach should be taken when writing an R package.


Mentors

Students, please contact the mentors after completing at least one of the tests below.

  • Patrice Kiener is the author of the R packages FatTailsR and RWsearch and has 18 years of experience with neural networks of perceptron type.
  • Christophe Dutang has authored or contributed to more than 10 packages and maintains the task views related to Probability Distributions and Extreme Value Analysis. He also had previous GSoC experience with the markovchain package in 2015 and 2016.
  • John Nash is a retired Professor of Management at the Telfer School of Management, University of Ottawa. He has worked for approximately half a century on numerical methods for linear algebra and function minimization, in particular with R. See ‘optim()’ and packages such as optimx, Rvmmin, Rtnmin, Rcgmin, lbfgsb3 and nlsr.

Tests

Students, please do one or more of the following tests before contacting the mentors above.

  • Can you explain the difference between first and second order algorithms?
  • Can you cite a few books on this topic? Which ones have you read? Which have you understood?
  • Is back-propagation necessary for neural networks?
  • Is back-propagation useful for neural networks? Why?
  • How many local minima can you expect after regression?
  • In case of several outputs, why is it better to have one model per output?
  • Give a few examples of your R code.
  • Have you ever contributed to an R package? If yes, which one?

If several students have an equal score after this first series of questions, a few other (unpublished) questions will be asked.


Solution of tests

Students, please send your test results by email to ...@inmodelia.com (P. Kiener and C. Dutang).


List your answers to the various questions in the body of the email. For the R code, attach at most one .R file, or one .7z archive bundling a few .R (or .Rmd) files.


Students

Two students have submitted proposals:

  • Student 1: Mr. Akshaj Verma, B.Sc. 4th year, from Bengaluru, India.
  • Student 2: Ms. Salsabila Mahdi, B.Sc. 2nd year, from Aceh, Indonesia.

Both Akshaj Verma and Salsabila Mahdi have been accepted on May 6 to participate in GSoC 2019.

Congratulations to them. The hard work starts!


Work done

49 packages/functions/algorithms were evaluated during the three months of work, on 12 datasets of varying difficulty prepared by Patrice Kiener. One package, NNbenchmark, and 49 templates, one per algorithm, have been published on GitHub. HTML pages with figures generated from R Markdown code have been published on Akshaj Verma's website. The article is under preparation. Visit the summary pages provided by the students:

We are currently preparing the final tables and the article summarizing our findings. In short, 10 packages/functions/algorithms return correct results; all other algorithms can be ignored. Stay tuned for the final report!
