Jarek Wilkiewicz
ML @ 10^100
7,039 followers
About
Jarek's posts

Post has shared content
In case you haven't seen it, the Distill journal (http://distill.pub), created and edited by Google Brain team members +Christopher Olah and +Shan Carter, launched earlier this week. It's a totally new kind of journal and presentation style for machine learning research: online, interactive, and encouraging lucid, clear presentations of research that go beyond a Gutenberg-era presentation medium for ML topics. I'm really excited about this, and Chris and Shan have put a ton of work into it. The reactions so far have been quite amazing and positive:

"Yes. Yes. Yes. A million times yes. I can't count how many times I've invested meaningful time and effort to grok the key ideas and intuition of a new AI/DL/ML paper, only to feel that those ideas and intuitions could have been explained much better, less formally, with a couple of napkin diagrams.... I LOVE what Olah, Carter et al are trying to do here." (Hacker News)

"I really love this effort. Research papers are low bandwidth way to get information into our brains..." (Hacker News)

"finally, someone gets it!! we need to COMMUNICATE research CLEARLY" (Twitter)

"My gosh, interactive dataviz is now the core of an academic journal. Thank you @shancarter & @ch402 & @distillpub!" (Twitter)

"This new machine learning journal is seriously exciting; an emphasis on clear explanation & interactive illustration" (Twitter)

"'Research Debt' - I am curious where @distillpub will go but I really like this essay by @ch402 & @shancarter" (Werner Vogels, CTO of Amazon, Twitter)

Blog posts announcing Distill:

Google Research: https://research.googleblog.com/2017/03/distill-supporting-clarity-in-machine.html

OpenAI: https://blog.openai.com/distill/

Y Combinator: https://blog.ycombinator.com/distill-an-interactive-visual-journal-for-machine-learning-research/

DeepMind: https://deepmind.com/blog/distill-communicating-science-machine-learning/

Chris Olah's blog: http://colah.github.io/posts/2017-03-Distill/



Post has attachment
If you're looking for a challenge, check out CMU's Master of Science in Software Management (Silicon Valley Campus) http://www.cmu.edu/integrated-innovation/degrees/mssm/index.html - the application deadline is June 1st.

https://twitter.com/CMUInnovation/status/844647452007419905

Post has attachment
The "Essence of linear algebra" series by 3Blue1Brown (http://www.3blue1brown.com/) is a real joy to watch - highly recommended: https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab

Post has attachment
Fav book so far this year
Photo

Post has attachment
https://iamtrask.github.io/2017/03/17/safe-ai/

"[...] the network can only make encrypted predictions (which presumably have no impact on the outside world because the outside world cannot understand the predictions without a secret key). This creates a valuable power imbalance between a user and a superintelligence. If the AI is homomorphically encrypted, then from it's perspective, the entire outside world is also homomorphically encrypted. A human controls the secret key and has the option to either unlock the AI itself (releasing it on the world) or just individual predictions the AI makes (seems safer) [...]"


Post has attachment
Kepler-1647b
Photo

Post has attachment
Flight
Photo

Post has shared content
Speed is everything for effective ML. That's why we developed XLA, a compiler for @TensorFlow! Read more https://goo.gl/QcjgxY
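
For context, here is a minimal sketch of how the XLA JIT was switched on in TensorFlow 1.x, the release line current when this was posted, using the global ConfigProto option from the XLA documentation of that era; the two-layer model is a made-up example, and whether XLA actually speeds it up depends on the graph and the hardware.

import numpy as np
import tensorflow as tf

# Made-up two-layer model, just to give XLA a graph to compile.
x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
w1 = tf.Variable(tf.random_normal([784, 256], stddev=0.1))
w2 = tf.Variable(tf.random_normal([256, 10], stddev=0.1))
logits = tf.matmul(tf.nn.relu(tf.matmul(x, w1)), w2)

# Ask TensorFlow to hand eligible subgraphs to the XLA JIT compiler.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(logits, feed_dict={x: np.random.rand(32, 784).astype(np.float32)})
    print(out.shape)  # (32, 10)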
