This Research Paper allowed me to 10x my salary
This paper was discussed during my interview at Goldman. It completely changed my life, and I used it in my first project as well.
https://arxiv.org/pdf/1506.03134
Data Scientists on
by Gooner7
Goldman Sachs
This Research Paper changed my life forever.
It was one of the papers discussed in my interview at Goldman. I came to know about it a few years back after consulting a friend doing an ML PhD at the University of Maryland, College Park. The explanation of the paper:

1. Initialize the neural network with small random values, typically in (-0.1, 0.1), to avoid symmetry issues.
2. Forward propagation: pass the training data through the multilayer perceptron and compute the output. For each neuron in the MLP, calculate the weighted sum of its inputs and apply the activation function (my favourite is tanh for LSTM applications).
3. Compute the loss between the computed output and the actual value, using a loss function like mean squared error.
4. Backpropagation: calculate the gradient of the loss function with respect to each weight by propagating the error backward through the network.
5. Compute the partial derivatives of the loss with respect to each weight, starting from the output layer and moving back to the input layer.
6. Here is the fun part: update the weights using the gradients obtained from the backward pass. People usually use the Adam optimizer, which accelerates stochastic gradient descent. Fun trivia: Adam stands for "Adaptive Moment Estimation".
7. Repeat the forward and backward passes for many iterations until the model's performance stabilizes.
https://www.iro.umontreal.ca/~vincentp/ift3395/lectures/backprop_old.pdf
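The seven steps above can be sketched end to end in a few lines of NumPy. This is a minimal illustration on a toy regression problem; the dataset, layer sizes, and hyperparameters are my own illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X)

# Step 1: initialize weights with small random values in (-0.1, 0.1)
W1 = rng.uniform(-0.1, 0.1, size=(1, 16))
b1 = np.zeros((1, 16))
W2 = rng.uniform(-0.1, 0.1, size=(16, 1))
b2 = np.zeros((1, 1))

params = [W1, b1, W2, b2]
# Adam state: first and second moment estimates per parameter
m = [np.zeros_like(p) for p in params]
v = [np.zeros_like(p) for p in params]
lr, beta1, beta2, eps = 0.01, 0.9, 0.999, 1e-8

for t in range(1, 2001):
    # Step 2: forward propagation (weighted sums + tanh activation)
    h = np.tanh(X @ W1 + b1)            # hidden layer
    out = h @ W2 + b2                   # linear output layer

    # Step 3: mean-squared-error loss
    loss = np.mean((out - y) ** 2)

    # Steps 4-5: backpropagation, output layer back to input layer
    n = X.shape[0]
    d_out = 2 * (out - y) / n           # dL/d(out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0, keepdims=True)
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)
    grads = [dW1, db1, dW2, db2]

    # Step 6: Adam ("Adaptive Moment Estimation") update, in place
    for i, (p, g) in enumerate(zip(params, grads)):
        m[i] = beta1 * m[i] + (1 - beta1) * g
        v[i] = beta2 * v[i] + (1 - beta2) * g ** 2
        m_hat = m[i] / (1 - beta1 ** t)
        v_hat = v[i] / (1 - beta2 ** t)
        p -= lr * m_hat / (np.sqrt(v_hat) + eps)
    # Step 7: repeat until the loss stabilizes
```

After training, `loss` should have dropped well below the initial value of roughly 0.4 (the variance of sin on this interval).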
This book helped me crack my ML interview at Goldman Sachs...
I have been getting a great response to my posts lately, so I decided to share one of the most important books on optimization algorithms that helped me crack my interview at Goldman. Mykel J. Kochenderfer and Tim A. Wheeler have done a great job explaining some of the toughest concepts in mathematical optimization, and they have implemented the code in Julia, which is super fun to code along with. The next post on AI will come when this post gets 50 likes. Sharing the link to the book here; it covers some of the most foundational topics in optimization.
This book provides a broad introduction to optimization with a focus on practical algorithms for the design of engineering systems. We cover...
https://algorithmsbook.com/optimization/files/optimization.pdf
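For a taste of the kind of algorithm the book covers, here is a minimal sketch of one of the most foundational ones: gradient descent with a backtracking line search, run on the Rosenbrock test function. This is my own toy setup in Python (the book's code is in Julia), not code from the book:

```python
import numpy as np

def rosenbrock(x):
    # Classic optimization test function with minimum at (1, 1)
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def rosenbrock_grad(x):
    # Analytic gradient of the Rosenbrock function
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
        200 * (x[1] - x[0] ** 2),
    ])

def gradient_descent(f, grad, x0, steps=20000, alpha=1e-3):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        # Backtracking line search: halve the step until f decreases
        a = alpha
        while f(x - a * g) > f(x) and a > 1e-12:
            a *= 0.5
        x = x - a * g
    return x

x_star = gradient_descent(rosenbrock, rosenbrock_grad, [-1.2, 1.0])
```

Plain gradient descent is notoriously slow in the Rosenbrock valley, which is exactly why the book's later chapters move on to momentum and second-order methods.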