yeah, that's 4 of the 12 linear discriminants. PCA performed way worse, unfortunately.
original data was 232 dims, and PCA got it down to about 80 if I wanted to retain ~90% of the variance, or 60 for ~80%. That worked okay with random forests, but for boosting and SVMs it wasn't doing well. I ran LDA on both the original data and the 80-dim PCA reduction, and the original performed better (that's what's plotted there).
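For anyone wanting to reproduce the comparison, here's a minimal sketch using synthetic stand-in data (I don't have the real dataset, so sample counts, informative-feature counts, and seeds are all assumptions; the dimensions mirror the post: 232 features, 13 classes so LDA yields 12 discriminants). sklearn's `PCA` accepts a float `n_components` to keep just enough components for a target variance fraction:

```python
# Sketch of the PCA-vs-LDA comparison described above, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 232 features, 13 classes (=> 12 linear discriminants).
X, y = make_classification(n_samples=2000, n_features=232, n_informative=40,
                           n_classes=13, n_clusters_per_class=1, random_state=0)

# Float n_components: keep as many components as needed for ~90% variance.
pca90 = PCA(n_components=0.90).fit(X)
print("dims for ~90% variance:", pca90.n_components_)

# LDA fit on the raw features vs. on the PCA-reduced features.
lda_raw = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
lda_pca = cross_val_score(LinearDiscriminantAnalysis(),
                          pca90.transform(X), y, cv=5).mean()
print(f"LDA accuracy, raw: {lda_raw:.3f}  PCA-reduced: {lda_pca:.3f}")
```

On real data the gap between the two LDA fits is what's worth reporting; here the numbers are only illustrative.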
pretty much just have shit data for any kind of linear DR. kPCA would probably be the next move if the goal weren't just to feed it through a few models to discuss in the paper, but at least this way I can talk about why the data is bad. It isn't imperative that I get a good response; I can pretty easily explain why the response is bad and make that the basis of the paper.
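If anyone does want to try the kPCA route mentioned above, sklearn's `KernelPCA` is a drop-in swap for `PCA` (this is just a hedged sketch on random data; the `rbf` kernel, `gamma`, and 60 components are assumptions, not something that was actually run):

```python
# Hedged sketch of the kernel PCA "next move" (not actually run on the real data).
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA

X, _ = make_classification(n_samples=500, n_features=232, random_state=0)

# RBF-kernel PCA; gamma would need tuning on real data.
kpca = KernelPCA(n_components=60, kernel="rbf", gamma=1e-3)
X_k = kpca.fit_transform(X)
print(X_k.shape)  # (500, 60)
```

Note that unlike linear PCA there's no explained-variance shortcut for picking `n_components`, so the component count usually gets tuned jointly with the downstream model.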
I have to say, my brethren, I have succeeded on this day. I had many leftovers in the toolbox(es) this morning from the long weekend, mostly beer and wine, so I decided why the hell not and enjoyed some good tidings starting at 0630, and continued to spread good cheer from thence forth. It's been a smashing success, in no small part thanks to you all's good company and the spirit of the holiday season. It's times like this when I really appreciate good friends and contract business, and I truly, from the depths of my heart, wish each and every one of you good tidings and great joy which shall be for all people.
With love and adoration.