Hey there — welcome to the first edition of Is this for real? Every other week, I’ll dig into one science or tech topic that’s been on my mind and give you my take on it. Then, I’ll switch gears and share a few pieces of content that I have come across recently and found fascinating. It’s the kind of stuff that made me stop and say, “Wait… is this for real?” Let’s get into it.

On the newsletter today… is GPT-5 for real?

If you missed the news, OpenAI announced its latest model, GPT-5, this week. Execs and PR teams did not point to any single new capability, only to the claim that it is better overall. If this is the revolution, it feels like a pretty polite one.

I watched the hour-plus YouTube livestream where they unveiled the new model so you don’t have to. Here are some of the highlights.

First, hallucinations. These are the moments when a large language model confidently lies to you — inventing a statistic, conjuring a fact out of thin air, or citing a webpage that doesn’t exist. GPT-5, we’re told, will do that less. Yay? Honestly, it’s wild watching executives from a multibillion-dollar company stand on stage and brag that their flagship product will now lie to you… slightly less often. And if you believe their own graphs, it might not even be lying less at all. (See the X post below for the receipts.)

Second, software on demand. Sam Altman has described GPT-5 as ushering in an era of software on demand, where you don’t need to know how to code. All you need is an idea, and the model will build it for you. I love that vision because implementing great ideas should not be limited to people who can program. But it’s important to recognize the limitations. If you don’t understand what ChatGPT is doing or writing, troubleshooting becomes incredibly difficult. Sharing or collaborating on those creations is even harder, which makes real-world applications limited for now. As far as I know, no investor is lining up to fund a startup whose only asset is a list of great app ideas without anyone who can actually code them. Still, while today’s limitations are real, I can see this capability improving significantly in the future.

Third, AI-run labs. In an interview with Cleo Abrams, Sam Altman said:

I want GPT-8 to go cure a particular cancer. And I would like GPT-8 to go off and think and then say, “Okay, I read everything I could find. I have these ideas. I need you to go get a lab technician to run these nine experiments and tell me what you find for each of them. And wait two months for the cells to do their thing.”

Send the results back to GPT-8. Say, “I tried that. Here you go.” Think, think, think. Say, “Okay, I just need one more experiment. That was a surprise. Run one more experiment.” Give it back. GPT says, “Okay, go synthesize this molecule and try mouse studies or whatever.” Okay, that was good. Try human studies. Okay, great, it worked. “Here’s how to run it through the FDA.”

As a cancer researcher, nothing would make me happier than having a tool that ensures every experiment we run is the right one. But science is not a perfectly ordered sequence of steps. It’s about navigating uncertainty, weighing conflicting evidence, and sometimes choosing a path when the data is incomplete or unclear. The vision in the quote above — a flawless chain of experiments leading neatly to a cure — leaves no room for the messy, intuitive decision-making that drives real breakthroughs. These models are trained on all the data that exists, but the best ideas are often the bold ones, the ones that don’t make sense at first.

On a positive note, I do like that I no longer have to go through a model picker in the web interface. ChatGPT now chooses the model automatically, which saves time and keeps the experience simpler for the user.

To conclude, whether we like it or not, these models are already shaping the real world. Only time will tell if we’re living through the kind of industrial-revolution moment so many are predicting. Check back in 20 years — I’ll either be proven right, or I’ll be writing this from the lab desk my AI boss assigned me.

Here is what I am into:

What I’m rewatching
I’m back on Stranger Things in anticipation of the fifth and final season arriving in November. I’m pairing it with the Streaming Things podcast, which dedicates an entire episode to each episode of the show. Sometimes the podcast is even longer than the episode itself. It’s the best. Highly recommend.

What I’m listening to
Nine years after first hearing it, I still have Hamilton on repeat. The recent TikTok trend where people reenact the scene of Hamilton leaving for the duel is not helping me move on.

What left me speechless
Tech giants were unknowingly hiring North Korean workers? And then there is “laptop farming.” This Bloomberg Businessweek article has the details.

An interview that stayed with me
Fareed Zakaria’s appearance on FLAGRANT was fantastic. The way he breaks down complex ideas and policies is masterful, and the conversation was sharp from start to finish.

What made me cry
Watching Tottenham fans say goodbye to Son Heung-min. Sonny has been through so much with the club, and while I am not a Spurs fan, seeing him leave the team that shaped him and that he shaped in return was emotional. This article from The Athletic puts it all in perspective.

What is keeping me up
I just finished reading Every Scientific Empire Comes to an End in The Atlantic, and I cannot stop thinking about it. It compares the fall of Soviet science to signs that American research may be headed in the same direction, raising the question of whether the United States is close to losing its scientific leadership.

Thanks for joining me for the first edition. My goal is to make this a biweekly tradition, and I’d love to hear your thoughts. You can reach me at [email protected].

Yours truly,
Gad
