
j-wong - k-nearest neighbarz lyrics

k-nearest neighbors, surprisingly it can slay
got these i.i.d women so i call them naive baes
with conditional vision guessing, i guess i should listen
their predictions more accurate than anything i could say, but wait
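
The opening bars name k-nearest neighbors and naive Bayes, whose "naive" assumption is that features are conditionally independent given the class. A rough scikit-learn sketch on made-up toy data, not anything from the song itself:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)  # vote of the 5 nearest points
nb = GaussianNB().fit(X_train, y_train)                          # "naive" conditional-independence assumption
print(knn.score(X_test, y_test), nb.score(X_test, y_test))       # whose predictions are more accurate?
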

equal weighter, i’m never a player hater
cuz naive bayes don’t care about the size of my data, now
take a second and pause, just so you know what i mean
y’all tryna muster the clusters i developed with k-means
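
A sketch of the k-means line, using scikit-learn with an arbitrary choice of three clusters on synthetic points:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
centers = np.array([[0, 0], [6, 0], [0, 6]])
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(100, 2)) for c in centers])  # three loose blobs

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)   # the cluster means k-means developed
print(km.labels_[:10])       # which cluster each point landed in
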

so you know what i mean, do you know what i meant
while i’m stochastically tilting the gradient with my descent
yes i, curved the plane, but it looks just fine
just gotta know the logistic function i usually define
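
"Stochastically tilting the gradient with my descent" plus "the logistic function" reads as logistic regression fit by stochastic gradient descent. A sketch using scikit-learn's SGDClassifier (the loss name assumes scikit-learn >= 1.1, where it is "log_loss"):

from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=500, random_state=0)
# log_loss turns SGDClassifier into logistic regression, updated one noisy gradient step at a time
clf = SGDClassifier(loss="log_loss", max_iter=1000, random_state=0).fit(X, y)
print(clf.score(X, y))
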

it’s sort of distance weighted, the middle is saturated
so the outliers in the data could never degrade it
i made it, simple, cutting dimensions to one
fishing for data with fisher’s discriminant till i was done

i’m tryna flatten the data to make some visual sense
go between all the classes maximizing the variance
just wanna categorize, supervised if i can
maximizing it relative to the variance within
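
The two verses above describe Fisher's linear discriminant: pick the one-dimensional projection w that maximizes the between-class variance relative to the within-class variance, J(w) = (w^T S_B w) / (w^T S_W w). A scikit-learn sketch on synthetic labeled data:

from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=300, n_features=5, n_informative=3, random_state=0)
lda = LinearDiscriminantAnalysis(n_components=1)   # cutting dimensions to one
X1 = lda.fit_transform(X, y)                       # projection chosen by the between/within variance ratio
print(X1.shape)                                    # (300, 1)
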

but let’s all go back, for a sec in case you missed it
take it back to a regression form that was logistic
i keep it saturated so there’s no reason to doubt, just
1 over 1 plus e to the minus alpha
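
That last line is the logistic (sigmoid) function, 1 / (1 + e^(-alpha)); it saturates toward 0 and 1 at the extremes, which is why the earlier verse says outliers can't degrade it much. A tiny numpy sketch:

import numpy as np

def logistic(alpha):
    # 1 over 1 plus e to the minus alpha
    return 1.0 / (1.0 + np.exp(-alpha))

print(logistic(0.0))                        # 0.5 right in the middle
print(logistic(np.array([-10.0, 10.0])))    # ~0 and ~1: saturated at the extremes
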

i like cool people, but why should i care?
because i’m busy tryna fit a line with the least squares
so don’t be in my face like you urkel
with that equal covariance matrix looking like a circle
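
"Fit a line with the least squares": a numpy sketch on made-up noisy points:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.shape)    # noisy points around a line

A = np.column_stack([x, np.ones_like(x)])                  # design matrix: [x, 1]
coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(coef)                                                # slope and intercept minimizing squared error
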

homie wanna know, how i flow this free?
i said i estimated matrices with svd
x to u sigma v, and with v, just transpose it
i rank-r approximate and everyone knows it
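
The verse spells out the thin singular value decomposition, X = U Sigma V^T; keeping only the top r singular values gives the best rank-r approximation. A numpy sketch with an arbitrary r:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # x to u sigma v, with v transposed
r = 5                                              # an assumed rank, just for illustration
X_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]        # rank-r approximation
print(np.linalg.norm(X - X_r))                     # how much was thrown away
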

i’m rolling in the whip, ‘cuz a brotha gotta swerve
jay-z’s with roc nation while i’m on the roc curve
true positives is good, so y’all don’t wanna stop that
i took the true negatives out and now i’m finna plot that
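
The ROC curve plots the true positive rate against the false positive rate as the classification threshold sweeps. A sketch with scikit-learn and pyplot on synthetic data:

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, scores)   # false positive rate vs. true positive rate
plt.plot(fpr, tpr)
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.title("roc curve")
plt.show()
print(roc_auc_score(y_te, scores))
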

but what about the case, where the labels isn’t known yet
i guess i gotta analyze the principal components
so if anybody really wanna track this
compute for greatest variance along the first principal axis
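
With no labels, PCA is the move: the first principal axis is the direction of greatest variance in the data. A scikit-learn sketch:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))

pca = PCA(n_components=2).fit(X)
print(pca.components_[0])            # the first principal axis
print(pca.explained_variance_[0])    # the greatest variance, computed along it
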

taking losses and insults, yeah i don’t like that burn
i prefer loss functions from models up in scikit learn
and if you didn’t catch my lyrics it was right here in the notes, look
open it in your terminal and run jupyter notebook

here you can thread it, if your processor’s cool
or you can add an import line for multiprocessing pool
then p.map it to a whole array until it returns
pyplot your data in a graph and see what scikit learned
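
This verse is nearly literal Python: bring in multiprocessing's Pool, map a function over a whole array of inputs, then plot with pyplot. A sketch (the per-item function is a made-up stand-in):

import multiprocessing

import matplotlib.pyplot as plt

def work(x):
    # stand-in for whatever you'd actually farm out to each process
    return x * x

if __name__ == "__main__":
    with multiprocessing.Pool() as p:        # an import line for multiprocessing pool
        results = p.map(work, range(10))     # p.map it to a whole array until it returns
    plt.plot(results)                        # pyplot your data in a graph
    plt.show()
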

got a whole data matrix and it’s n by p
if you wanna stretch or compress it well that’s fine by me, but
we’ll see, what the data can reveal real soon
just compute the singular vectors and their values too

it’s never invertible but that’s not really the worst
i’ll pull an mtm on it and hit that pseudoinverse
finding a line of best fit with no questions
and no stressin, estimate it with linear regression
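
When X^T X ("mtm") is singular, the normal equations can't be inverted directly, but the Moore-Penrose pseudoinverse (computed via SVD) still gives the minimum-norm least-squares fit, beta = pinv(X) y. A numpy sketch with a deliberately redundant column:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X = np.column_stack([X, X[:, 0] + X[:, 1]])          # redundant column, so X^T X is not invertible
y = X @ np.array([1.0, 2.0, 3.0, 0.0]) + rng.normal(scale=0.1, size=50)

beta = np.linalg.pinv(X) @ y                         # pseudoinverse instead of (X^T X)^-1 X^T
print(beta)                                          # coefficients of the line of best fit
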

scale you in your eigenspace, put you in your eigenplace
smack you backwards when i’m sick of looking at your eigenface
with a big empty matrix of data that wasn’t done
got 99 columns but you’re still rank one
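
"99 columns but you're still rank one": if every column is a scalar multiple of one vector, the matrix's rank is 1 no matter how wide it is. A quick numpy check:

import numpy as np

u = np.arange(1.0, 11.0).reshape(-1, 1)       # one column vector
M = u @ np.ones((1, 99))                      # 99 columns, each a multiple of u
print(M.shape, np.linalg.matrix_rank(M))      # (10, 99) 1
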

so, this one goes out, to all of my haters
overfitting their models, validating on their training data
cuz my classifier smoking you leaving only the vapors
so don’t be messin’ with me, or my k nearest neighbors
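
The closing dig: scoring a model on the data it was trained on hides overfitting, while a held-out split shows the honest number. A scikit-learn sketch:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print(tree.score(X_tr, y_tr))   # near 1.0 when you validate on your training data
print(tree.score(X_te, y_te))   # the honest, usually lower, held-out score
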
