I felt physically ill reading this story about Facebook’s facilitation of murder and slavery.
... reports from employees who are studying the use of Facebook around the world, including human exploitation and other abuses of the platform. They write about their embarrassment and frustration, citing decisions that allow users to post videos of murders, incitements to violence, government threats against pro-democracy campaigners and advertisements for human trafficking.
There have been plenty of stories about Facebook’s malfeasance over the years. Two things make this story particularly awful. One is the human scale of the harm the company causes. It’s not some abstract discussion about political influence; it’s personal examples like a woman named Patricia being recruited and sold into slavery. The other is that Facebook knows about the problems and is choosing not to act. Or at least not to act enough to be meaningful.
Facebook treats harm in developing countries as “simply the cost of doing business” in those places.
I’d had enough. I rage-quit Facebook on Thursday. Or at least tried to.
The problem is I’m a captive of Facebook. Despite all the horrors, it’s still a good semi-private way to keep in touch with people. It’s my primary social connection to the small gay community in Nevada County, for instance, including a group that organizes weekly meetups. It’s also helped me reconnect with old high school friends. I’m well aware of the hundreds of other social media tools I could ask them to use; I helped design some of them. But the reality is that a lot of community happens on Facebook, and if you don’t participate there, you miss out.
I don’t know what I’m going to do with Facebook. I was going to delete my account but feel I can’t. I’m trying to stay logged out, but I already feel the need to connect occasionally, even if only to arrange social connections outside of Facebook. I’m one of those extremely online people; I can’t just disconnect entirely. But it feels like being forced to visit the house of a psychopath.
This week’s story is one in a series of investigative articles by the WSJ. Some Facebook employees have been trying to lessen the harm their company is doing, and they’re tired of being ignored. So they’re talking to reporters, particularly Jeff Horwitz. As a collection the reporting is incredibly damning: lying to their oversight board, ignoring mental harm to young women, amplifying anger and lies, sabotaging American vaccination efforts, and helping the business of murderers and slavers.
Facebook is in some ways just reflecting the larger evils of society. I’ve worked on social media policy. I understand the difficulty of moderating conversations. But as a medium Facebook is a very efficient amplifier of evil; its existence uniquely enables things like the genocide of the Rohingya. That creates an obligation on Facebook to mitigate the harmful uses of their product. They have failed to do that. Maybe the only remedy is to stop them from operating.
Starlink is good technology. I’m posting this blog entry from space. By which I mean Starlink, SpaceX’s new LEO satellite Internet service. I’ve been beta testing it for six months and using it exclusively for two. It is terrific. Some notes from a United States perspective.
Does it work well? Hell yeah! It’s more like having cable Internet than satellite. Over the last month my average bandwidth has been 100 Mbps down, 12 Mbps up. Average ping to 126.96.36.199 of 41 ms with 0.5% packet loss. Bandwidth is highly variable (50–200 Mbps) but latency is pretty solidly 30–50 ms. The main failure mode is occasional outages of ~10 seconds. That got a lot better mid-July with a Starlink change and will only improve as they launch more satellites. The service is still technically beta and there are some rough patches, but it’s totally usable.
Is Starlink for you? Maybe. If you can get wired service (cable, fiber, faster DSL) that is probably a better choice. If you’re in a poorly served rural area in Ajit Pai’s America and you’re struggling with ViaSat or Hughes or using cellular, definitely. For me it’s an upgrade to my 12/1 Mbps fixed wireless service.
Will Starlink work at your house? Probably! It requires a clear view to the north. The free Starlink mobile app has an augmented reality tool to show you whether you have a good view. A few small obstructions are OK but if you live in the middle of a bunch of trees you need to go higher on your roof or get a mast.
Can you get it? Probably not soon. They are enormously back-ordered; Starlink has a limited amount of bandwidth per satellite and they are slow to add new users. They just passed 100,000 installs globally and are rumored to have 500,000+ customers on the waiting list. The best thing you can do is pre-order and put down a $99 deposit. It may be a year. (I got super lucky.)
Is it nerd friendly? Totally. You can use your own router; Starlink provides one but does not require it. Dishy has an open gRPC interface for getting detailed stats. The Internet service is quite solid and not messed with in any way I can tell. They sorta support IPv6 already and promise more. The main drawback is that (at least in IPv4) the service is behind CGNAT; you really can’t run a server behind Starlink in any reasonable way. The CGNAT is there for good reason: they’re doing some very sophisticated routing, your packets may be relayed through several base stations hundreds of miles apart, and it’s remarkable you have a stable external IP address at all.
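If you’re curious whether any connection is behind carrier-grade NAT, the tell is the RFC 6598 shared address space, 100.64.0.0/10, which carriers use between their NAT and your router. A minimal Python sketch (the addresses are illustrative, not Starlink’s actual assignments):

```python
import ipaddress

# RFC 6598 reserves 100.64.0.0/10 as shared address space for
# carrier-grade NAT. If the WAN address your router reports falls
# in this range, you're behind CGNAT and inbound connections
# can't reach you directly.
CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")

def behind_cgnat(wan_ip: str) -> bool:
    """Return True if wan_ip is in the RFC 6598 CGNAT range."""
    return ipaddress.ip_address(wan_ip) in CGNAT_RANGE

print(behind_cgnat("100.72.1.5"))   # True: CGNAT shared space
print(behind_cgnat("203.0.113.9"))  # False: ordinary public address
```

If your router’s WAN address is in that range but a “what’s my IP” service shows you something different, that difference is the carrier’s NAT at work.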
Will Starlink succeed as a business? That’s hard to say. The program is still beta and currently has no bandwidth caps or significant throttles. And it’s $99/month: a lot by US urban ISP standards but competitive for rural areas. The problem is the satellites can only handle so many users, and it seems too early to tell whether that works out to be profitable. Launching thousands of satellites is expensive, but SpaceX are experts at that. Also crappy companies like ViaSat keep suing to stop Starlink because they can’t compete, and Amazon is trying to make up for being years behind by getting the FCC to harass Starlink. The legal attacks seem to be failing so far in the US, but you never know.
What’s next for Starlink? They launched their first full shell of satellites earlier this year, then took a pause. They seem to be fine-tuning algorithms and transceiver power settings right now. The next major change is a plan to use laser links so satellites can route packets directly (currently everything is relayed to the ground). It seems like a very hard problem but they are serious about doing it; ultimately Starlink might be better than wired service.
I’m not an Elon Musk fan but I have to say Starlink is amazing. And audacious; I never would have believed it would work (remember Iridium?). But it does work and it’s been a significant upgrade for me, so thank you SpaceX. The hilarious thing is the whole idea of Starlink apparently is about Mars; the project started as a way to design networking infrastructure for a colony on a new planet. Oh and then completely upended Earth’s ISP market as a sort of proof of concept.
My old 2014 server died so I’ve migrated all my personal projects over to a new one. If you’re reading this post everything’s working and you’re on the new server. I’ll have some notes about the migration over on my secret workblog soon. Nothing exciting, just upgrading to Ubuntu 18.
I’ve taken this opportunity to deprecate or remove some old projects. My live rivers vector tile demo is down for good. The GitHub repo with the tutorial lives on, but serving those tiles took a big stack of software that wasn’t worth setting up again given no one was looking at it. I’m also winding down Logs of Lag, my League of Legends tool. Partly because its code had rotted and I didn’t have enough interest to update it, partly because I’m disgusted with Riot Games and don’t want to spend any time making free tools for them.
I’m reading The Planet Remade: How Geoengineering Could Change the World. It’s by an Economist journalist, a book about the taboo topic of engineering our way out of global warming. It starts with two questions. Do you think global warming is a real threat? Do you think reducing CO2 emissions to near zero is very hard?
If you answer "yes" to both, then maybe you’re open to an alternate solution to climate change: geoengineering. Active measures to combat global warming other than just telling everyone they have to stop using energy. Some of these methods seem plausible and inexpensive and worth discussing.
One example idea is stratospheric aerosol injection. You fly planes or balloons regularly up to 60,000 feet or so and spray sulfur dioxide. There it turns into various gasses that reflect sunlight, replicating more or less what volcanoes already do naturally. Estimates are it’d cost $2B–$8B per year to completely counteract global warming, a small amount of money. The intervention is temporary; the sulfurous particles don’t last forever. That’s a good thing in that if something goes catastrophically wrong it’s not permanent, and a bad thing in that it’s an annual cost that has to be maintained. Note the technique is simple and cheap enough that any individual country could choose to do this unilaterally, at any time.
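To put “a small amount of money” in perspective, here’s the back-of-envelope arithmetic, taking the book’s $2B–$8B/year estimate at face value and assuming a world population of roughly 8 billion:

```python
# Back-of-envelope: the estimated annual cost, per person.
# Both figures are assumptions: the $2B-$8B/year estimate above,
# and a rough world population of 8 billion.
low_cost, high_cost = 2e9, 8e9  # dollars per year
population = 8e9

print(f"${low_cost / population:.2f} to ${high_cost / population:.2f} per person per year")
# somewhere between a quarter and a dollar a head, annually
```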
There are other possible techniques for geoengineering. Chemical carbon capture, planting enormous forests, orbital sun shades, … But the topic is nearly taboo in environmental circles; a recent UN meeting failed to get consensus to even decide to discuss the idea further. If you believe global warming is a threat to humanity and that our current efforts to stop CO2 emissions are failing, it seems worth thinking about alternatives. Even if you don’t like the idea of geoengineering it’s worth studying in case some other country decides to just do it on their own.
Me, I like the idea. Because I’m a technocrat. I want to believe science and engineering can solve any problem. Also because it doesn’t require demanding that the 50%+ of the world that’s not yet industrialized goes years more without cheap power because we’re mad they’ll put as much carbon in the air as the rich countries already have. The risks are obvious and enormous. But they are also hopefully manageable and the risk is absolutely worth it if it saves our planet.
The Ceptics travel adapters are good hardware for American world travellers. They combine a USB charger with a passive plug adapter for AC. Carrying one of those is enough for me for a multi-week vacation, providing plenty of charging for my laptop, phone, tablet, etc. No need for extra chargers or adapters. They come in different models for European, Australian, UK, etc sockets. They also make a universal adapter kit but it’s a lot bulkier.
You’d think a passive AC adapter would be a simple thing, but a lot of them suck. Ceptics’ design seats reliably and firmly in the socket. And it has a reasonable workaround for polarization and universal 3-prong grounding. You may not get a true third wire for ground (e.g., the Type C European plug is only 2-prong), but at least your laptop charger will physically plug in and function.
The Ceptics’ USB charging works well. USB charger quality varies greatly. I can’t really judge what’s inside the Ceptics charger except to say that it never gets warm and seems to provide the right magic signals to charge my iPad quickly.
It’s property tax time. So I went to the SF treasury website to pay my tax bill. And got an SSL certificate error from Firefox.
Oops: the cert expired a year ago and is for the wrong domain. I get it, government websites are often underfunded and don’t work well. Maybe they didn’t know how big a problem this presents in modern browsers that enforce SSL security. So I wrote a polite note to support. And got this response from the San Francisco 311 Customer Service Center.
Please use the right protocol to access our website. Please use http://sftreasurer.org instead of https://…
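For what it’s worth, you don’t need a browser to see a cert’s validity dates; `openssl s_client -connect host:443 | openssl x509 -noout -dates` prints them. Or in Python, a quick sketch using the stdlib (the date below is illustrative, not SF’s actual cert):

```python
import ssl
import time

# A certificate's notAfter field looks like "Oct  5 23:59:59 2020 GMT".
# ssl.cert_time_to_seconds converts it to a Unix timestamp, so we
# can compare it against the clock.
def cert_expired(not_after, now=None):
    """True if the certificate's notAfter date is in the past."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return (now if now is not None else time.time()) > expiry

# A cert that expired in 2020 is, indeed, expired today:
print(cert_expired("Oct  5 23:59:59 2020 GMT"))  # True
```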
Simplenote is good software. It’s a very simple cross-platform note taker with excellent cloud synchronization between devices. It’s perfect for drafting a few paragraphs of text, keeping a simple to do list, or jotting down an address while you’re on the phone. Under the hood it’s got some remarkably sophisticated features like version history, note sharing, etc. But all that is out of the way if you just want a box to type in.
Simplenote is free software, a gift from the folks at Automattic. They’re mostly known for WordPress but they have a surprising number of other public good services they run, mostly for free or with value-add purchases. Akismet, Longreads, Gravatar, Cloudup; I had no idea these were all Automattic. Good for them.
I got my first new car in 12 years, a 2017 Audi A3. Happily I was able to find one of the few A3s that has Driver Assistance, the fancy adaptive cruise control and lane holding system. Love it, so glad I got it. The feature is more common on the high end Audis, but for the A3 you have to get the “Prestige” trim level which is not commonly stocked by California dealers.
The simple part of the system is adaptive cruise control. I set my speed in 2.5 mph increments and the car paces the vehicle in front of me using radar sensors. You can select how closely it follows. It will bring the car to a full stop if it has to. It’s great in heavy traffic on I-80; the only drawback is I’m now less aggressive about switching lanes to get around someone slow. If only every car had this feature, we could smooth out a lot of traffic jams as everyone drove a constant speed.
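I have no idea how Audi actually implements it, but the core pacing behavior can be sketched as a toy proportional controller (all names and numbers here are made up for illustration, not the A3’s real logic):

```python
def acc_target_speed(set_speed, lead_speed, gap, desired_gap, k=0.5):
    """Toy adaptive-cruise logic: never exceed the driver's set
    speed; otherwise track the lead car, trimming speed when the
    gap (meters) falls below the desired following distance."""
    correction = k * (gap - desired_gap)  # close the gap error
    return max(0.0, min(set_speed, lead_speed + correction))

# Lead car at 55 mph, set speed 65, gap exactly at target:
print(acc_target_speed(65, 55, 40, 40))   # 55.0: pace the lead car
# Gap shrinks to 20 m: go slower than the lead car to reopen it
print(acc_target_speed(65, 55, 20, 40))   # 45.0
# No slow car ahead (huge gap): cruise at the set speed
print(acc_target_speed(65, 55, 200, 40))  # 65
```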
The other fancy feature is active lane assist. The car detects highway lanes with cameras. If I start to drift out of the lane it gives a bit of a nudge to the wheel. Ostensibly it’s to remind me to hold my lane, but the nudge is strong enough it actually steers the car back into the lane by itself. It’s very much not an autopilot, though; the car complains after ~10 seconds of nudges. And the sensor isn’t reliable in the face of bad paint or unusually wide lanes, so you really can’t rely on it all the time.
I like how both technologies are like little daemons helping me drive. I’ve written before about the dangers of full autopilots that expect a driver to take over if something fails. The A3 systems aren’t full autopilots, I’m still engaged in the task of driving at all times. Although it does require less attention. I’m still learning to trust the daemons, sometimes when the lane holding feature moves the wheel I instinctively try to countersteer away, the exact wrong thing.
All the other electronics in the A3 are very nice too. The virtual cockpit display is beautiful. The maps are good. The stereo plays plenty of audio formats, although the 10,000 file limit on SD cards is awfully dumb. I’m even liking Apple CarPlay.
I’m hoping the next car I buy will have a full autopilot. Although once that tech reaches mainstream it may no longer make sense to buy a car.
I went to Reed College, a wonderful small liberal arts college. It was a perfect fit for me in almost every way. Except one thing: Reed offered no computer science. Excellent math and physics program in the liberal arts tradition, but no engineering of any kind. I was fine with that tradeoff at first but got frustrated, even considered transferring to MIT.
What made Reed work for me was a tiny little computer lab tucked in the library basement, the grandly named Academic Software Development Laboratory. That was the home for a few beardy Unix nerds, some students, some staff. Gary Schlickeiser was in charge at the time (Richard Crandall set it up). Gary hired me and I spent the next four years getting paid part time and summers to learn Unix at the knee of folks like Bill Trost and Kent Black. Our official job was writing software for professors’ research projects and providing Unix support, but really my time was spent being steeped in Internet culture. Also a lot of Netrek.
My very first job was getting Netatalk working on our Ultrix 2.2 systems so they could be file servers to Macintoshes. Mind you, this is 1990, networking software back then was full of jaggy sticks and sharp rocks. I learned how to download software via UUCP, how zcat | tar worked, how to run make and read compiler errors, all sorts of wooly crap. I got it running but it didn’t work, at which point Norman Goetz taught me how to use some ancient packet sniffer (Lanalyzer?) to figure out the problem. That’s when I learned about little-endian vs big-endian and in the end all I had to do was #define MIPSEL and suddenly it all worked. That was my first month’s accomplishment.
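For the curious, the whole MIPSEL bug boils down to byte order. Network protocols transmit multi-byte fields big-endian (“network order”), so a little-endian host like a MIPSEL box must byte-swap on the wire; compile without telling the code its byte order and every field comes out garbled. Python’s struct module makes the effect easy to see (the value is arbitrary):

```python
import struct

n = 0x12345678
print(struct.pack(">I", n).hex())  # big-endian / network order: 12345678
print(struct.pack("<I", n).hex())  # little-endian (MIPSEL): 78563412

# Write little-endian, read as network order: the classic mismatch.
(big,) = struct.unpack(">I", struct.pack("<I", n))
print(hex(big))  # 0x78563412: the garbage the other end would see
```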
And so I was initiated into the Unix priesthood. Ever since then I’ve traded on my ability to write software and make computer systems work. Software is not an academic discipline, certainly not a liberal art. It’s a craft. And the only way to learn craftsmanship is to apprentice to master craftsmen, to learn hands on from experts.
The D-Lab was the home for that expertise. Later I worked on more interesting projects including Mark Bedau’s artificial life research, running a Usenet daemon, setting up Reed’s first web site, etc. Those projects led directly to my career.
Reed stopped having a D-Lab around ten years ago. But two years ago a new program started, the Software Design Studio, with enthusiastic support from some alumni. Reed is also creating a computer science program that will be pretty math intensive. I hope the SDS is a place where folks can learn some of the applied craft.
The Internet mostly survived the leap second two days ago. I’ve seen three confirmed problems. Cloudflare DNS had degraded service; they have an excellent postmortem. Some Cisco routers crashed. And about 10% of NTP pool servers failed to process the leap second correctly.
We’ve had a leap second roughly every two years. They often cause havoc. The big problem was in 2012, when a bunch of Java and MySQL servers died because of a Linux kernel bug. Linux kernels died in 2009 too. There are presumably a lot of smaller user-application failures as well, most of them unnoticed. Leap second bugs will keep recurring. Partly because no one thinks to test their systems carefully against weird and rare events, but also because time is complicated.
Cloudflare blamed a bug in their code that assumed time never runs backwards. But the deeper problem is that POSIX defines a day as containing exactly 86,400 seconds. Every 700 days or so that’s not true, and a lot of systems jump time backwards one second to squeeze in the leap second. Time shouldn’t run backwards during a leap second; that’s just a bad kludge. There are other options available, like the leap smear used by Google. The drawback is your clock is off by as much as 500 ms during that day.
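A linear smear is easy to sketch. This is an illustrative model of the general idea (roughly the 24-hour linear scheme Google has described), not anyone’s actual implementation:

```python
# Sketch of a linear leap smear: instead of repeating a second,
# spread the extra second linearly over a 24-hour window, so the
# clock is never stepped backwards but drifts up to 500 ms off
# true UTC at the midpoint of the window.
SMEAR_SECONDS = 24 * 3600

def smear_offset(seconds_since_smear_start):
    """Offset (in seconds) applied to the clock, ramping
    smoothly from 0 to 1 across the smear window."""
    t = min(max(seconds_since_smear_start, 0), SMEAR_SECONDS)
    return t / SMEAR_SECONDS

print(smear_offset(0))          # 0.0  smear just starting
print(smear_offset(12 * 3600))  # 0.5  midpoint: 500 ms off UTC
print(smear_offset(24 * 3600))  # 1.0  full leap second absorbed
```

The monotonicity is the point: the offset only ever grows, so code that assumes time never runs backwards (like Cloudflare’s) stays happy.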
The NTP pool problem is particularly galling; NTP is a service whose sole purpose is telling time. Some of the pool servers are running openntpd, which does not handle leap seconds; IMHO those servers aren’t suitable for public use. It’s not clear what else went wrong, but leap second handling has been awkward for years and isn’t getting better.