So I've been following an online machine learning course offered by Stanford University, and I've recently been reading up on logistic regression and stochastic gradient ascent. Here is a link to the original notes: http://cs229.stanford.edu/notes/cs229-notes1.pdf (pages 16-19).

Here is my code. The issue I'm having is that the log-likelihood seems to be steadily decreasing (becoming more negative) rather than increasing after every change in theta. The learning rate alpha I'm using is 0.0001, which is small, but anything larger produces nan values as output from the log-likelihood function. I've spent quite some time playing around with this value and looking over my code, and I can't figure out what's going wrong.
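To illustrate where the nan can come from (a standalone sketch I put together, not part of the course code): once the linear score gets large enough, the sigmoid rounds to exactly 1.0 in double precision, and the (1 - y)*log(1 - hyp) term of the log-likelihood hits log(0):

```python
import numpy as np

# Minimal illustration of the nan/-inf: for a large enough linear score,
# 1/(1 + exp(-lin)) rounds to exactly 1.0 in float64, so log(1 - hyp)
# evaluates log(0) = -inf, and any sum involving it goes to -inf or nan.
lin = np.array([5.0, 40.0])           # moderate and large linear scores
hyp = 1. / (1 + np.exp(-lin))         # hyp[1] rounds to exactly 1.0
with np.errstate(divide='ignore'):
    second = (1 - 0) * np.log(1 - hyp)  # the y = 0 term of the log-likelihood
print(second)                          # second[1] is -inf; second[0] is finite
```

This is why a larger alpha, which lets theta grow faster, pushes the naive log-likelihood into nan territory.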

Code (Text):

import numpy as np

def logregg(X, y, alpha, iterations):
    #output parameter vector
    (m, n) = np.shape(X)
    theta = np.array([0] * n, 'float')
    lin = np.dot(X, theta)
    hyp = 1./(1 + np.exp(-1*lin))

    i = 0
    #stochastic gradient ascent
    while i < iterations:
        for j in range(m):
            theta = theta + alpha*(y[j] - hyp[j])*X[j,:]
            print(loglik(X, y, theta))
        i += 1
    return theta

def loglik(X, y, theta):
    lin = np.dot(X, theta)
    hyp = 1./(1 + np.exp(-1 * lin))
    first = y*np.log(hyp)
    second = (1-y)*np.log(1-hyp)
    return np.sum(first + second)
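As an aside, the nan itself can be avoided by evaluating the same quantity in a numerically stable form. This is my own rewrite (not from the course notes), based on the identity y*log(h) + (1 - y)*log(1 - h) = y*lin - log(1 + exp(lin)), where lin is the linear score:

```python
import numpy as np

# A numerically stable version of the same log-likelihood (my own sketch,
# not part of the course notes). np.logaddexp(0, lin) computes
# log(1 + exp(lin)) without overflow, so log(0) is never evaluated even
# when the sigmoid saturates at large |theta|.
def loglik_stable(X, y, theta):
    lin = np.dot(X, theta)
    return np.sum(y * lin - np.logaddexp(0, lin))
```

For moderate theta this agrees with the naive loglik above to floating-point precision, and it stays finite where the naive version returns -inf or nan.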

**Physics Forums - The Fusion of Science and Community**


# Logistic regression: Stochastic Gradient Ascent (in Python)


