This week, I’ve been building out my newest (and biggest) project yet, JBR. You probably already recognize that name, since I’ve been folding all my work under the JBR umbrella already. Well, now it officially has a mind of its own. Read on for more details!
🔨What I’ve Been Making
New writing: Two new blog posts on the site: State of the Union 2022 (which really is just a reupload of the video from last biweek) and The End of Gender. On the Ko-fi page I posted a brief introduction to the new JBR project.
New video/audio: One new podcast episode, Emotional Equivalent Salary, with another coming later today. Not much new on the video front for right now, but expect a new video on the main astukari channel coming (relatively) soon.
Other projects: JBR, the holding company for Apalla and Astukari, now has its own page! The primary goal of JBR will be to help creatives get the resources and support they need to build a following and gain financial independence. Right now our main focus is building a community on Discord — you can join it by clicking the button below!
📚What I’ve Been Reading
What Should You Work On? - This Perell essay covers the distinction between structure-seeking and structure-averse people, as well as the prestige heuristic: ideas that, unsurprisingly, often tie into one another. A big problem is that smart people who are structure-averse go into prestige jobs because of peer pressure or goal ambiguity, only to realize too late that, no, they did NOT want to be an investment banker after all. Perell goes over some solutions to this problem here.
A call to build models like we build open-source software - Here’s an interesting thought: if we want to accelerate the rate of innovation in machine learning, why not just open-source it? To be fair, we already do this relatively often, but many “open-source” datasets are badly out of date, and the same goes for the algorithms. So what if we immediately open-sourced everything? There are obviously some bureaucratic reasons why this wouldn’t work in practice, but there’s also plenty of evidence that open-sourcing model knowledge greatly accelerates progress in a field.
The Economics of Pinball - This mostly deals with the economics of incentives, and is a rather fascinating look at how the pinball machines of olde used to optimize customer spend by offering players *just* enough free plays. Reminds me of something in modern games…
The business of extracting knowledge from academic publications - More on the topic of optimizing the innovation rate: many businesses have been created that aggregate lessons learned from academic papers. Research on these businesses has found that, unfortunately, they don’t do a very good job of helping us find new or underrepresented solutions. An interesting side note, however, is that they do help us get up to speed on a topic faster. So, in a roundabout way, they do in fact help the innovation rate; just not in the way we were expecting.
Attacking NLP systems with adversarial examples - NLP systems, for those unaware, are ML models that process and generate written content automatically. They already have practical applications in automated copywriting, content marketing, and video games. However, many of these systems feed user inputs back into their training pipelines. That means if you give a model an input so confusing and strange that it breaks the model’s behavior, the break doesn’t just affect you; it affects everyone who uses that model. Because of this, hackers have been working on ways to break NLP systems solely via crafty inputs. This post goes over some examples of these.
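To make the idea concrete, here’s a toy sketch (my own, not from the post) of the kind of “crafty input” involved. Real attacks target neural models, but the principle shows up even against a naive keyword classifier: swapping a Latin “e” for the visually identical Cyrillic “е” leaves the text looking unchanged to a human while completely changing what the system sees.

```python
# Toy adversarial-text demo: a tiny, invisible perturbation flips the
# output of a naive sentiment flagger. Hypothetical example for
# illustration only; real attacks work against learned models.

NEGATIVE_WORDS = {"terrible", "awful", "broken"}

def naive_sentiment(text: str) -> str:
    """Flag text as 'negative' if it contains any known negative word."""
    words = set(text.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "ok"

def adversarial_perturb(text: str) -> str:
    """Replace Latin 'e' with the look-alike Cyrillic 'е' (U+0435)."""
    return text.replace("e", "\u0435")

clean = "this product is terrible"
attacked = adversarial_perturb(clean)

print(naive_sentiment(clean))     # → negative
print(naive_sentiment(attacked))  # → ok (the perturbed word no longer matches)
```

The attacked string renders identically on screen, which is exactly what makes this class of input so hard to filter out before it reaches the model.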