Searching for style
It’s interesting to watch young children use home assistants. They treat them like people, asking questions like “Alexa, how are you feeling?” and “Alexa, where are you?” I often wonder what their motivation is. Maybe they don’t know the responses are scripted. Maybe the designers’ choice of which response to script is just as interesting to them as a human’s response would be.
Adults are no less inclined to anthropomorphize (Alexa is marketed as “her,” not “it”), but they’ve grown used to Alexa’s shortcomings as a conversationalist.
Why do we use cross entropy and mean squared error?
A wise man once told me that inexperienced engineers tend to undervalue simplicity. Since I’m not wise myself, I don’t know whether this is true, but the ideas that show up in many different contexts do seem to be very simple. This post is about two of the most widely used ideas in deep learning, the mean squared error loss and the cross entropy loss, and how, in a certain sense, they’re the simplest possible approaches.
Let’s start with the mean squared error, which is used when you want to predict a continuous value.
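As a quick reminder before diving in (this is the standard definition, not a result from the post itself): for predictions $\hat{y}_i$ against targets $y_i$ over $n$ examples, the mean squared error is

$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2.$$

Squaring penalizes large errors more heavily than small ones and keeps the loss differentiable everywhere.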
TabNine's first month in review
2020 Jacob here. This post was written back in 2018. Since then, I’ve added deep learning completions and sold the company; it’s now in the capable hands of Codota.
It’s been roughly a month since TabNine launched on Hacker News and Reddit. I want to talk about how TabNine has been doing and share my plans for its future.
Pre-release development
First, some background. I’m an undergraduate at the University of Waterloo. I started working on TabNine in February 2018, in my spare time while interning at Jane Street.