I am a senior computer science student interested in web application development and machine learning. I wasn’t interested in machine learning until I watched a couple of videos about machine learning and deep learning. Then I decided to learn it, not just because it is the future but also because it is so much fun. I enjoy learning new things and sharing what I learn with my friends (like you), and I like to apply what I learn in real life.
“Knowledge is more valuable when you share it and use it.”
To learn more about this field I am using several different methods. I am planning to write a blog post about the ways I learn; hopefully I can publish it in a couple of weeks. Follow my blog page to keep track.
One of the methods I use is reading posts from different blogs, and my favorite is Medium. Today I am going to summarize a post I read and found helpful. It was written by Nityesh, and the title of the post is “Getting Started with reading Deep Learning Research papers: The Why and the How.”
Let’s Get Started
How do you become “self-sufficient” so that you don’t have to rely on someone else to break down the latest breakthrough in the field?
Short answer: you have to read research papers to stay up to date.
Andrew Ng is the founder of Google Brain and former head of the Baidu AI Group. On the discussion website Quora, he answered a question about machine learning, saying that after you complete some ML-related courses, you have to read research papers to go further. Even better, try to replicate the results in the papers yourself.
Dario Amodei is a researcher at OpenAI. He says that to test your fit for working in AI safety or ML, just try implementing a lot of models very quickly: find a machine learning model from a recent paper, implement it, and try to get it to work.
In summary, the only way to keep up with the pace of the field is to read research papers as they are released to the public.
“Nothing makes you feel stupid quite like reading a scientific journal article”
Arxiv.org is a place where researchers post their papers before they are formally published in reputable scientific journals or at conferences.
Why do they make their work public this fast? The answer is easy: publishing in a scientific journal takes a lot of time. For a fast-growing field like machine learning, a couple of years is a really long time, so researchers post their papers on sites like Arxiv.org to quickly disseminate their research and get feedback on it.
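If you want to pull recent papers from Arxiv programmatically, it exposes a public API at export.arxiv.org that returns an Atom feed. Here is a minimal sketch that just builds a query URL for the latest submissions in a category (the category name `cs.LG` and the result count are example choices; you could fetch the URL with any HTTP client and parse the Atom XML):

```python
from urllib.parse import urlencode

def arxiv_query_url(category="cs.LG", max_results=5):
    """Build a URL for the public arXiv API, asking for the most
    recently submitted papers in the given category."""
    params = {
        "search_query": f"cat:{category}",  # e.g. cs.LG = machine learning
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "start": 0,
        "max_results": max_results,
    }
    return "http://export.arxiv.org/api/query?" + urlencode(params)

print(arxiv_query_url())
```

Fetching that URL returns an Atom feed whose entries contain each paper's title, abstract, authors, and PDF link, so a short script like this is enough to build your own "latest papers" digest.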
Arxiv Sanity Preserver
Another resource is Arxiv Sanity Preserver. It was built by Andrej Karpathy, who is the director of AI at Tesla.
Arxiv Sanity Preserver does for Arxiv what Twitter’s newsfeed does for Twitter: among all the papers posted, it lets you see only the topics you are interested in, giving you a more personalized feed. Check this short video to learn more about Arxiv Sanity.
WAYR on Reddit
The third source I can suggest is the WAYR thread on Reddit. WAYR stands for “What Are You Reading.” It is a thread on the r/MachineLearning subreddit where people share the parts they found interesting from the papers they have read.
This is a good way to stay on track with the updates happening in the field you are interested in. Another option is newsletters: their stories are usually picked according to top trends, so the topics are usually worth reading. I highly recommend newsletters. Here are some you might find helpful:
- Import AI by Jack Clark
- Machine Learnings by Sam DeBrule
- Nathan.ai by Nathan Benaich
- The Wild Week in AI by Denny Britz
Another way to keep up is to follow researchers and developers who are experts in their fields. Twitter is one of the good platforms for this. Here are some people you might find helpful to follow:
- Michael Nielsen
- Andrej Karpathy
- Francois Chollet
- Yann LeCun
- Chris Olah
- Jack Clark
- Ian Goodfellow
- Jeff Dean
Thank you for reading this far; I hope this was a helpful post.
Don’t forget to subscribe to see more posts like this 🙂