A description of your two classification problems, and why they are interesting. Think hard about this. To be at all interesting, the problems should be non-trivial on the one hand, yet admit comparison and analysis of the various algorithms on the other. In other words, when you run the supervised learning algorithms on the datasets, they should give different results.
The training and testing error rates you obtained running the various learning algorithms on your two problems. At the very least you should include graphs that show performance on both training and test data. For iterative algorithms, you should plot an appropriate performance metric on the y-axis against training time on the x-axis (choosing an appropriate notion of “time”: epochs? iterations?). You should also plot performance as a function of training dataset size (note that this implies your classification problems need more than a trivial amount of data).
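The two required plot types can be sketched as follows. This is a minimal illustration, assuming scikit-learn and a synthetic dataset standing in for your own problems; it computes the data you would hand to matplotlib, not the plots themselves.

```python
from warnings import filterwarnings

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

filterwarnings("ignore")  # single-epoch fits below trigger convergence warnings

# Placeholder problem; substitute your own dataset here.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Plot type 1: performance as a function of training-set size (learning curve).
sizes, train_scores, test_scores = learning_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="accuracy")
train_mean = train_scores.mean(axis=1)  # training accuracy per size
test_mean = test_scores.mean(axis=1)    # cross-validated test accuracy per size

# Plot type 2: performance as a function of "time" for an iterative learner
# (here, epochs of a small neural network trained one epoch per fit call).
split = 700
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1,
                    warm_start=True, random_state=0)
train_curve, test_curve = [], []
for epoch in range(20):
    net.fit(X[:split], y[:split])  # warm_start=True: one more epoch per call
    train_curve.append(net.score(X[:split], y[:split]))
    test_curve.append(net.score(X[split:], y[split:]))
# x-axes: sizes / epoch number; y-axes: the accuracy arrays above.
```

Plotting the train and test curves on the same axes is what exposes overfitting (a gap between them) and data-starvation (test accuracy still rising at the largest training size).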
Analyses of your results. Why did you get the results you did? Compare and contrast the different algorithms. What sort of changes might you make to each of those algorithms to improve performance? How fast were they in terms of wall clock time? Iterations? Would cross-validation help (and if it would, why didn’t you implement it)? How much performance was due to the problems you selected? How much was due to the values you chose for learning rates, stopping criteria, regularization, pruning methods, and so forth (and why doesn’t your analysis show results for the different values you tried)? Which algorithm performed best? How do you define best? Be creative: think of as many questions as you can, and as many answers as you can.
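One way to support answers about hyperparameter sensitivity is k-fold cross-validation over a small grid, reporting the whole curve rather than a single chosen value. A hedged sketch, assuming scikit-learn, a synthetic placeholder dataset, and an illustrative grid of pruning strengths (`ccp_alpha`) for a decision tree:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Placeholder problem; substitute your own dataset here.
X, y = make_classification(n_samples=600, n_features=15, random_state=1)

# Candidate pruning strengths (illustrative values, not a recommendation).
alphas = [0.0, 0.005, 0.02, 0.08]

results = {}
for alpha in alphas:
    clf = DecisionTreeClassifier(ccp_alpha=alpha, random_state=1)
    scores = cross_val_score(clf, X, y, cv=5)        # 5-fold CV accuracy
    results[alpha] = (scores.mean(), scores.std())   # mean and spread per alpha

best_alpha = max(results, key=lambda a: results[a][0])
```

Reporting the full table of means and standard deviations, not just `best_alpha`, is what lets you argue how much of your performance came from the parameter values you chose versus the problem itself.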