I got my first new car in 12 years, a 2017 Audi A3. Happily I was able to find one of the few A3s that has Driver Assistance, the fancy adaptive cruise control and lane holding system. Love it, so glad I got it. The feature is more common on the high end Audis, but for the A3 you have to get the “Prestige” trim level which is not commonly stocked by California dealers.
The simple part of the system is adaptive cruise control. I set my speed (in 2.5 mph increments) and it paces the car in front of me using radar sensors. You can select how closely it follows, and it will bring the car to a full stop if it has to. It’s great in heavy traffic on I-80; the only drawback is I’m now less aggressive about switching lanes to get around someone slow. If only every car had this feature, we could smooth out a lot of traffic jams as everyone drove a constant speed.
The other fancy feature is active lane assist. The car detects highway lanes with cameras. If I start to drift out of the lane it gives a bit of a nudge to the wheel. Ostensibly it’s there to remind me to hold my lane, but the nudge is strong enough that it actually steers the car back into the lane by itself. It’s very much not an autopilot, though: the car complains after ~10 seconds of continuous nudging. And the sensor isn’t reliable in the face of bad paint or unusually wide lanes, so you really can’t rely on it all the time.
I like how both technologies are like little daemons helping me drive. I’ve written before about the dangers of full autopilots that expect a driver to take over if something fails. The A3 systems aren’t full autopilots; I’m still engaged in the task of driving at all times, though it does take less attention. I’m still learning to trust the daemons: sometimes when the lane holding feature moves the wheel I instinctively try to countersteer away, the exact wrong thing.
All the other electronics in the A3 are very nice too. The virtual cockpit display is beautiful. The maps are good. The stereo plays plenty of audio formats, although the 10,000 file limit on SD cards is awfully dumb. I’m even liking Apple CarPlay.
I’m hoping the next car I buy will have a full autopilot. Although once that tech reaches mainstream it may no longer make sense to buy a car.
I went to Reed College, a wonderful small liberal arts college. It was a perfect fit for me in almost every way. Except one thing: Reed offered no computer science. Excellent math and physics program in the liberal arts tradition, but no engineering of any kind. I was fine with that tradeoff at first but got frustrated, even considered transferring to MIT.
What made Reed work for me was a tiny little computer lab tucked in the library basement, the grandly named Academic Software Development Laboratory. That was the home for a few beardy Unix nerds, some students, some staff. Gary Schlickeiser was in charge at the time (Richard Crandall set it up). Gary hired me and I spent the next four years getting paid part time and summers to learn Unix at the knee of folks like Bill Trost and Kent Black. Our official job was writing software for professors’ research projects and providing Unix support, but really my time was spent being steeped in Internet culture. Also a lot of Netrek.
My very first job was getting Netatalk working on our Ultrix 2.2 systems so they could be file servers to Macintoshes. Mind you, this was 1990; networking software back then was full of jaggy sticks and sharp rocks. I learned how to download software via UUCP, how zcat | tar worked, how to run make and read compiler errors, all sorts of wooly crap. I got it built and running, but it didn’t actually work, at which point Norman Goetz taught me how to use some ancient packet sniffer (Lanalyzer?) to figure out the problem. That’s when I learned about little-endian vs big-endian, and in the end all I had to do was #define MIPSEL and suddenly it all worked. That was my first month’s accomplishment.
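That class of bug is easy to show. Here’s a tiny Python sketch (purely illustrative, nothing to do with the original Netatalk code) of the same 32-bit value in the two byte orders:

```python
import struct

value = 0x12345678

big = struct.pack(">I", value)     # big-endian, a.k.a. network byte order
little = struct.pack("<I", value)  # little-endian, what MIPSEL selects

print(big.hex())     # 12345678
print(little.hex())  # 78563412

# Unpacking wire data with the wrong byte order silently reads garbage:
(wrong,) = struct.unpack("<I", big)
print(hex(wrong))    # 0x78563412
```

When a protocol is defined in network (big-endian) byte order and your machine is little-endian, every multi-byte field has to be swapped on the way in and out; miss one and the code compiles and runs fine while producing nonsense on the wire.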
And so I was initiated into the Unix priesthood. Ever since then I’ve traded on my ability to write software and make computer systems work. Software is not an academic discipline, certainly not a liberal art. It’s a craft. And the only way to learn craftsmanship is to apprentice to master craftsmen, to learn hands on from experts.
The D-Lab was the home for that expertise. Later I worked on more interesting projects including Mark Bedau’s artificial life research, running a Usenet daemon, setting up Reed’s first web site, etc. Those projects led directly to my career.
Reed stopped having a D-Lab around ten years ago. But two years ago a new program started, the Software Design Studio, with enthusiastic support from some alumni. Reed is also creating a computer science program that will be pretty math intensive. I hope the SDS is a place where folks can learn some of the applied craft.
The Internet mostly survived the leap second two days ago. I’ve seen three confirmed problems. Cloudflare DNS had degraded service; they have an excellent postmortem. Some Cisco routers crashed. And about 10% of NTP pool servers failed to process the leap second correctly.
We’ve had a leap second roughly every two years, and they often cause havoc. The big problem was in 2012, when a bunch of Java and MySQL servers died because of a Linux kernel bug. Linux kernels also died in 2009. There are presumably a lot of smaller user application failures as well, most of them unnoticed. Leap second bugs will keep recurring, partly because no one thinks to test their systems carefully against weird and rare events, but also because time is complicated.
Cloudflare blamed a bug in their code that assumed time never runs backwards. But the real problem is that POSIX defines a day as containing exactly 86,400 seconds. Every 700 days or so that isn’t true, and a lot of systems jump time backwards one second to squeeze in the leap second. Time shouldn’t run backwards during a leap second; it’s just a bad kludge. There are other options, like the leap smear used by Google, which spreads the extra second gradually across the surrounding day. The drawback is your clock is off by as much as 500ms during that day.
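The smear idea is simple enough to sketch. Here’s a hypothetical linear smear (an illustration, not Google’s actual implementation): instead of stepping the clock back, the extra second is absorbed gradually over a 24-hour window centered on the leap, so time stays monotonic and is never more than 500 ms off:

```python
def smeared_offset(t, leap_epoch, window=86_400.0):
    """Seconds to subtract from the raw (non-leap-aware) clock at time t.

    Linear smear centered on the leap instant: the offset ramps
    from 0 to 1 second across the window, so time never jumps back."""
    start = leap_epoch - window / 2
    if t <= start:
        return 0.0
    if t >= start + window:
        return 1.0
    return (t - start) / window

leap = 1_000_000.0  # placeholder epoch value for the leap instant

print(smeared_offset(leap - 86_400, leap))  # 0.0: well before, no smear
print(smeared_offset(leap, leap))           # 0.5: worst case, 500 ms off
print(smeared_offset(leap + 86_400, leap))  # 1.0: leap fully absorbed
```

The tradeoff is exactly the one above: the smeared clock disagrees with true UTC by up to half a second during the window, but no process ever observes time running backwards.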
The NTP pool problem is particularly galling; NTP is a service whose sole purpose is telling time. Some of the pool servers are running openntpd which does not handle leap seconds. IMHO those servers aren’t suitable for public use. Not clear what else went wrong but leap second handling has been awkward for years and isn’t getting better.
I ran into an awkward problem in Europe; I couldn’t get SMS messages. It’s a design flaw in Apple’s handling of text messages, its favoring of iMessage over SMS. If you turn data roaming off on your phone when travelling, you may not be able to get text messages reliably.
If you have an iPhone suitably logged in to Apple’s cloud services, other iPhones (and Apple stuff in general) will prefer to deliver text messages via iMessage instead of SMS. You see this in the phone UI: the messages are blue, not green. In general iMessage is a good thing. It’s cheaper and has more features.
The problem is that Apple’s iMessage delivery requires the receiving phone to have an Internet connection via WiFi or cellular data. If you have no WiFi at the moment and have data roaming turned off, your phone is offline, so Apple can’t deliver to you via iMessage. They seem to buffer sent messages for when you come back online. Which is too bad, because your phone could still receive the message via SMS; unfortunately iMessage doesn’t fall back to SMS delivery.
In practice this design flaw meant I had to leave data roaming turned on all the time because I needed to reliably get messages from another iPhone user. Which then cost me about $30 in uncontrollable data fees from “System Services”. Some $15 was spent by Google Photos spamming location lookups (a bug?), another $15 receiving some photo iMessages from a well-meaning friend. Admittedly the SMS fallback I’d prefer would also cost some money, but I think significantly less in my case.
There’s a broader problem with iMessage which is that once a phone number is registered with it, iPhones forever more will not send SMS to that number. Apple got sued over this, so now they have a way to deregister your number.
The world has had its first self-driving car fatality: a Tesla autopilot failed. So far the world hasn’t freaked out. I think self-driving cars will be way safer than human-driven cars. But there’s a lot of shaping the truth in Tesla’s announcement.
(Fair warning: this blog post is uninformed hot take territory. I’m reacting to Tesla’s description of the crash, published two months after the death. We’ll know a lot more after an independent investigation.)
Tesla’s press release is masterful. It characterizes the cause of the accident like this:
“the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.”

A truck pulled out in front of the car on the highway. It may well have been an unavoidable accident. We’ll know eventually.
But note the facility of claiming the “driver” didn’t notice the truck. How do we know that? The man is dead, we have no idea what he saw. I don't know about you, but I've never once failed to spot a white truck against a bright sky, particularly when I'm driving towards it at 70mph. I could see how a computer vision system would fail that test though.
“The brake was not applied”. It takes time to apply the brakes after you see your death coming at you. Doubly so if you’re not actually driving. The passenger-behind-the-wheel was almost certainly not having his foot hovering gently near the accelerator / brake like an engaged driver would. That slows reaction time. I do this all the time with my simple cruise control and it scares the hell out of me when some slow jerk pulls in front of me and I don’t react quickly.
(I also admire the comfort of “he never saw it coming”. Sort of takes the sting out of the next sentence, which describes the unfortunate’s grisly decapitation.)
The real problem here is Tesla’s autopilot is a half measure, “driver assist”. It doesn’t fully drive the car. This design is the most dangerous of all worlds. I had this experience with my airplane’s autopilot all the time. At some point when the automation does enough work, you can’t help but check out mentally, let the machine take over. But if the machine isn’t capable of taking over entirely you can end up dead.
That’s why I’m in favor of fully autonomous vehicles. No steering wheel, no accelerator, maybe just a single brake or other emergency cutout. Of course in this situation the software has to work reliably; say, a fatality rate half that of human drivers. And insurance and the law have to adapt to this shift of control to software. I believe the technology nerds are very close to having systems that can fully drive a car with no “driver assist” ever needed, at least in clear weather. It will be a better future. And those robot cars will kill some of their passengers. Far fewer than humans are killing now.
Discord is good software. It’s a sort of Slack clone aimed at the gamer market, with the marquee feature being group voice chat. But the non-voice features work well too and there’s no reason the product has to only be used by gamers. It’s particularly interesting because Slack is clear that it is an enterprise product. All those free-tier Slacks of 100s of people don’t work very well. Discord could capture the consumer market.
Discord works well and is free. The browser client, desktop client, and mobile clients are all solid and reliable. The voice chat is good quality. The login model works better than Slack if you are a member of multiple communities. It’s very easy to get Discord up and running as a Slack replacement and as a Teamspeak / Skype replacement for voice chat.
But the product still has some rough edges. The typography and design are not as beautiful as Slack. There’s no reacji, no custom emoji support. The API is not yet gelled, although the unofficial stuff works great. Discord is also not an enterprise product; there’s no message search, little file sharing support, fewer administrative features. But it’s a very good free consumer product.
Speaking of free, so far Discord hasn’t monetized. They say the core functions will always be free and they will sell “optional cosmetics like themes, sticker packs, and sound packs”. I’m a little skeptical that’s going to be enough but I appreciate they’ve at least not talked about ads (yet).
The company has $30M in venture funding from top tier investors. It was founded by the team that built OpenFeint, the iPhone gaming social system that Apple destroyed when it launched its terrible GameCenter product. I’m excited that this team is building something like Slack, but for consumers instead of companies.
What’s nice about Hover is it’s no bullshit. It’s a simple registrar with simple DNS service. And excellent support with questions answered by real, thinking humans. They’re not the fanciest registrar. They don’t offer all the TLDs in the world, their DNS services are limited, they’re not the cheapest. But they are simple and trustworthy. In a business as scammy as domain names it’s nice to buy service from someone decent.
I just had a terrific experience where I asked them why there’s no whois privacy offered on one of the new novelty TLDs. I’d seen a few domains registered there with hidden whois data, but Hover wouldn’t do it for me. We went back and forth a few times, and the support rep finally explained that the TLD’s policy didn’t allow for whois privacy, but that other registrars might do it anyway, and that if I really wanted whois privacy I should use one of them instead. I appreciated the frank answer.
This description of the Brave browser sounds like an unethical business. Brave markets itself as making a safer and faster Web by blocking ads. I’m all in favor of blocking ads. But Brave also replaces ads with its own and then only gives about half of the revenue to the content publisher. That seems wrong to me.
I don’t like ads. Blocking ads is good: it stops the intrusion into my mind and makes for a technically better Internet experience. Replacing ads is not good. Seeing different ads does not help me keep my mind clear. And substituting one ad server with another does not significantly improve my Internet experience, even if the ad company pinkie-swears its ads are technically better.
But the real problem is that ad replacement is siphoning revenue from content producers. I’m OK with denying content producers revenue entirely, it’s a shame but Internet ads are odious enough in 2016 I think it’s necessary. But a third party interjecting itself to siphon off half the revenue is wrong.
The situation is even uglier with ad blocking extensions. AdBlock Plus skims 30% of ad revenue to let Google, Microsoft, and Amazon ads slip through their blocker. That sounds like pure extortion to me, bad for the ad networks and bad for the end users. A similar racket is developing in mobile ad blockers. These businesses are unethical.
We went through an era in the 2000s with ISPs and DNS services injecting their own ads into web pages. They claimed for a year or two it was OK and better for users, until legal action (and SSL) stopped them. Let’s not reproduce that experience with software vendors.
This is outrageous. I install software on my computer to block ads, a clear statement of user preference. The Economist colludes with PageFair to ignore that choice and run software on my computer that I explicitly don’t want. And that software turned out to be installing malware.
The folks who write things like PageFair need to be sued into oblivion. Not just the company; stop the people who built this abusive technology from ever creating software again.
Machine learning is becoming a mainstream technology any journeyman software engineer can apply. We expect engineers to know how to take an average and standard deviation of data. Perhaps it’s now reasonable to expect a non-expert to be able to train a learning model to predict data, or apply PCA or k-means clustering to better understand data.
The key change that’s enabling high end machine learning like Siri or self driving cars is the availability of very large computing clusters. Machine learning works better the more data you have, so being able to easily harness 10,000 CPUs to process a petabyte of data really makes a difference. For us civilians with fewer resources, libraries like scikit-learn and cloud services make it possible for us to, say, train up a neural network without knowing much about the details of backpropagation.
The danger of inexpert machine learning is misapplication. The algorithms are complex to tune and apply well. A particular worry is overfitting, where it looks like your system is predicting the data well but has really learned the training data too precisely and it won’t generalize well. Being able to measure and improve machine learning systems is an art that I suspect can only be learned with lots of practice.
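Overfitting is easy to demonstrate with a toy experiment. Here’s a minimal sketch in plain Python (no ML library needed): a model that simply memorizes the training set scores perfectly on training data but loses to a humble least-squares line on held-out data:

```python
import random

random.seed(0)

# Synthetic data: y = 2x plus Gaussian noise.
data = [(float(x), 2.0 * x + random.gauss(0, 1)) for x in range(100)]
random.shuffle(data)
train, test = data[:70], data[70:]

def nn_predict(x):
    # "Memorizer": 1-nearest-neighbour lookup into the training set.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def fit_line(points):
    # Ordinary least-squares fit of y = a + b*x.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    a = my - b * mx
    return lambda x: a + b * x

def mse(model, points):
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

line = fit_line(train)
print("memorizer train MSE:", mse(nn_predict, train))  # exactly 0.0
print("memorizer test MSE: ", mse(nn_predict, test))
print("line test MSE:      ", mse(line, test))
```

The memorizer’s zero training error is the tell: measured only on data it has already seen, it looks perfect, while on held-out data the simple line generalizes better. That’s why you always evaluate on data the model wasn’t trained on.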
I just finished an online machine learning course that was my first formal introduction. It was pretty good and worth my time, you can see my detailed blog posts if you want to know a lot more about the class. Now I’m working on applying what I’ve learned to real data, mostly using IPython and scikit-learn. It’s challenging to get good results, but it’s also fun and productive.