
Advice for Living in a World Being Changed by AI

  • Spencer Greenberg
  • 15 min read

Short of time? Read the key takeaways.

⚖️ AI is a profoundly mixed technology. Like nuclear energy, it offers major benefits (e.g., better information, education, creativity, healthcare) while also posing serious risks (e.g., misinformation, job loss, inequality, and potential existential threats).


🧠 Public perception of AI is distorted by both hype and skepticism. Many people underestimate current capabilities due to outdated experiences, bias, or backlash against hype, even as the technology continues to advance rapidly.


⚠️ AI has reached a meaningful inflection point. Spencer Greenberg argues that recent advances mark a qualitative shift, rendering AI far more powerful and impactful than many people realize, with large and uncertain consequences.


🧭 Individuals should adapt thoughtfully to an uncertain, fast-moving AI world. Given the above, Spencer offers some advice. If you're not ethically opposed, try experimenting with AI use; stay informed without overwhelming yourself; anticipate career impacts; consider AI for personal projects; and engage constructively with concerns about its risks by considering contributions to non-profits aiming to help protect the world from this technology's dangers and downsides.


Note: This is a slightly more opinionated piece than usual for us. It contains the personal opinions of Clearer Thinking’s founder, Spencer Greenberg. We hope you find it to be helpful food for thought.


I don't want to be alarmist, but if you haven't been following AI closely, there's something very important I think you should know: recently, AI has turned a corner. What the future holds is unclear, but what is clear is that a lot of things we take for granted are going to change. Before I get to my reasons for claiming this, here's why I think this is such a big deal:


Like the splitting of the atom, which can be used to produce abundant energy or wage nuclear war, AI is an extremely mixed technology. It has the potential to make things substantially better, for instance, by making high-quality information even easier to find, by making education far more personalized, by making quality medical advice much more accessible, and by enabling people to create things they’ve always dreamed of but have never been able to before.


But at the same time, AI carries the potential to cause tremendous harm across many different axes - from misinformation and manipulation, to scams, to slop, to authoritarianism, to job loss, to energy and resource use, to increased inequality - and perhaps even to threatening human civilization. We recently investigated people’s level of concern about 16 of these risks in a study, explored the possible existential threat from AI in a podcast episode, and discussed potential economic and job-loss effects from AI in another episode, here.


These issues - both the concerns about the technology, and its positive potential - are such a big deal that I think it's worth being aware of what's going on. We’re now in the era of AI, and it’s an era of great uncertainty. The error bars on its impacts are, in my view, incredibly large.


I know that what I’m saying here (if you believe it) can be nerve-racking for some and may create a lot of uncertainty. What can or should you do? It's tough to say, but I have a few pieces of advice, for whatever they are worth. The purpose of this article is to give you that advice. But first, here’s a little bit more about how I see the state of things.



There has been a change


Recently, AI has become a lot more powerful than many people realize or acknowledge.


I know that, to some people, me saying this will make them think I’m uncritically swallowing AI marketing hype, or that I’m just a shill for AI. I hope our previous work (e.g., discussing AI risks and rejecting the goodness binary) provides some evidence against that. Ultimately, if you feel skepticism or annoyance in response to what I say below, that’s fine, but I hope you’ll still consider my arguments. 


COVID-19 provides a helpful analogy here.


In the very early days of COVID, I felt a bit like I was going crazy due to my view of reality splintering from almost everyone else's. I was starting to be convinced it may become a global pandemic, and yet people weren't yet acting as though it was a big deal (even the stock market had only just started to react). In late February 2020, I decided to write publicly about it in this very newsletter, proposing that it was time to at least consider preparing for COVID. I got some serious pushback from people at the time, saying I was being alarmist (some of them angry). In retrospect, I am glad I posted it when I did, which is why I'm writing this now.


It's bizarre to me now, seeing so many takes about how AI technology has hit a wall, or isn't that impressive, or is just a “stochastic parrot”, or is "merely" next token prediction. I often see people laughing at AI making dumb mistakes and writing off the technology as useless. For instance, in this now-famous example from 2024, it suggested a less-than-helpful "method" for preventing cheese from sliding off pizza:



Sometimes these takes about the ineffectiveness of AI come from people who have only played with the free versions of AI offerings, or who haven't used the technology in more than 12 months. These versions are far inferior to the newest paid ones, and so while these people’s reactions are understandable, they don’t reflect the true state of the technology. Other times these takes seem to be motivated by intense worries about (or dislike of) AI or big tech. Of course, no matter how much you hate or worry about AI or tech companies, that doesn't entail anything about the power of the technology. 


Still other people form these skeptical takes on AI as a reaction to the over-claiming that happens - and yes, there's a lot of over-claiming and hype. But an over-claim about one aspect of a technology doesn't mean that another aspect isn't incredibly powerful and moving fast.


AI is in the bizarre state of being simultaneously one of the most overhyped and underhyped technologies. Cutting through what's hype and what's not can be challenging.


All of these skeptical takes about AI are distracting many people from the important reality. 


The best way I know of to make you aware of what's really going on is to tell you about what’s different now compared to (roughly) a year and a half ago. 


AI went…



1 ) ...from making simple mathematical mistakes that a high schooler might not make, to solving research math problems that at least a few serious mathematicians had attempted but not yet solved. AI is going to change the way mathematics is practiced (though it is, of course, not now a replacement for mathematicians). I’ve already been using it to check some moderately complex math problems that come up in my work, and it sometimes finds errors or important assumptions that I missed. This Atlantic article expresses a nuanced take from one of the world's greatest mathematicians, Terence Tao: 


“The first time we spoke, in the fall of 2024, Tao had likened chatbots to ‘mediocre, but not completely incompetent’ graduate students… But during our most recent conversation, he was more bullish. AI may not be on the cusp of solving all of the world’s great math problems, but chatbots are at the point where they can collaborate with human mathematicians... the technology is opening up a different ‘way of doing mathematics.’ According to Tao: 'in the short term we’re going to get a lot of quick wins on easy problems from pure AI methods. And then over the next few months, I think we’re going to have all kinds of hybrid, human-AI contributions.’”

2 ) ...from producing photo-like and art-like images to being able to produce images that, when curated, the majority of people can't distinguish from real photographs or from art made by professional artists. Relatedly, I already know of people who have lost their graphic design jobs and been replaced by AI image generation.


In a test with 11,000 participants, conducted on the blog Astral Codex Ten, an equal number of human-made and AI-created images in four styles (e.g., Renaissance, Abstract/Modern) were shown, and participants had to say which were AI. The average score was barely better than random guessing, at just 61%. But this used what is now practically "ancient" technology - the results are from Nov, 2024! The technology is substantially more powerful today. That's not to say AI is producing "true" art, or that every AI image is equally good.
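To get a feel for what "barely better than random guessing" means statistically, here's a minimal sketch of a normal-approximation check of an observed accuracy against the 50% chance baseline. The 61% figure is from the test above; the number of guesses passed in is a hypothetical placeholder, since the post doesn't state how many images each participant judged:

```python
import math

def accuracy_z_score(observed_acc, n_trials, chance=0.5):
    """Normal-approximation z-score for an observed accuracy
    versus a chance baseline over n_trials independent guesses."""
    se = math.sqrt(chance * (1 - chance) / n_trials)  # standard error under chance
    return (observed_acc - chance) / se

# 61% observed accuracy vs. a 50% coin-flip baseline.
# n_trials=1000 is an illustrative assumption, not a figure from the test.
print(round(accuracy_z_score(0.61, 1000), 1))
```

The point of the sketch: with enough guesses, 61% is clearly distinguishable from chance in the statistical sense, yet it remains practically close to coin-flipping for any individual image.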


I'll leave it as an exercise for you to see whether you can tell which of these four still life images of fruit were made by AI in minutes and which were painted by a 19th-century painter. At least one was made by an AI, and at least one by a person. The answer is at the bottom of this article. For what it's worth, my Twitter audience failed at this task - in fact, they got it completely backwards.


3 ) ...from struggling to engage with complex reasoning, to producing high-quality critiques of essays and academic papers, including often being able to find significant factual errors and logical flaws in people’s work.


I’m not claiming that it can now reliably produce graduate-level essays from nothing, but it is now quite useful to have the best AI models critique your writing and claims – even though, of course, the critiques won’t always be valid and often include a mix of strong and weak points. A slew of new tools have cropped up for academics and researchers to help find mistakes in their work before they submit or publish it.


4 ) ...from frequently "hallucinating" false information on simple topics, to being able to combine built-in knowledge (from the LLM training process) with hundreds of automatically conducted searches to prepare mostly-accurate reports on complex topics in 20 minutes that would take a person multiple hours and sometimes days of work to produce. I've confirmed (on areas that I do know a lot about) that these are typically reasonably accurate, and used them on areas I don't know much about to learn quickly. 


For instance, here's the start of a report automatically generated with AI on Power Posing (the idea that "high-power" body postures can increase feelings of power and other biological or behavioral factors):


5 ) ...from producing music-like sound to producing music that people don't realize is AI-generated, which is already getting a large listenership on Spotify. The Guardian reported in Nov, 2025 that "Three songs generated by artificial intelligence topped music charts this week, reaching the highest spots on Spotify and Billboard charts." 


The amount of AI music that's going to be produced will be staggering, likely with substantial impacts on the music industry. By playing around with this technology for a couple of hours, I was able to get AI to make two full songs that I legitimately really enjoy listening to (just for my own listening, not public consumption) despite the fact that I have no musical skill. (Of course, I'm not claiming this music was any good or that I have good musical taste.) Today, AI songs are getting a lot of listens by people who don't realize they are listening to AI music, and this is likely to increase tremendously. 


In one study by an AI detection company, participants were asked to identify which of three songs were AI-generated (two were fully AI-created and one was human-made), and only 3% got all answers correct. My understanding is that (at least as far as copyright law in the US goes), despite being trained on enormous numbers of real songs, AI-generated songs are not considered copyright infringement so long as they don't sound too similar to any existing songs.


6 ) ...from producing highly questionable medical advice, to producing some types of medical analyses on par with doctors (when provided the same patient information), as evaluated by at least some metrics of quality. These AIs are not at the point of replacing doctors, but the best of these AIs are now very useful for helping people understand their medical conditions and for coming up with questions and treatment possibilities to raise with doctors. 


For instance, in a recent study on “conversational diagnostic AI” for medical use, called “AMIE”, it was found that:


“Blinded assessment of differential diagnoses and management plans suggested similar overall plan quality between AMIE and [primary care providers], without significant differences for [differential diagnoses] as well as the appropriateness and safety of management plans. However, [primary care providers] outperformed AMIE in the practicality and cost-effectiveness of Mx plans. AMIE’s [differential diagnosis] included the final diagnosis, per chart review 8 weeks post-encounter, in 90% of cases, with 75% top-3 accuracy, and remained high for the subset of 46 patients where the final diagnosis was confirmed by a diagnostic test.”

7 ) ...from producing glitchy proof-of-concept but often uncanny valley video clips, to producing segments of video that are being included in professional commercials. We don't seem to be that far away from AI being used routinely in movies. 

For some reason, video of Will Smith eating spaghetti has become a standard way to demonstrate progress in AI video generation:



8 ) ...from doing badly at many types of IQ test problems, to being able to solve most types of non-visual IQ test problems at a performance level above that of the average person. In our own tests, current AIs were able to ace or nearly ace a number of non-visual tasks often included in IQ tests (while struggling with some of the visual tasks).


9 ) ...from providing a handy "auto complete" for programmers, to coding entire complex apps/websites/data analyses based on only English language descriptions (and subsequent feedback) from a non-programmer. A number of top tech companies now have more than 70% of their code written by AI, which is an almost unbelievably fast shift from a mere 4 months ago.


In Nov, 2024 (ages ago by the standards of this technology), Paul Graham, co-founder of the most famous startup accelerator (Y Combinator), wrote:


“One of the most obvious indicators [that AI actually helps businesses] is the percentage of code that's now written by AI. I ask all the software companies I meet about this. The number is rarely lower than 40%. For some young programmers, it's 90%... These are YC founders, who tend to be good programmers, and good programmers wouldn't tolerate AI filling their source files with crap.”

A year ago, in Apr, 2025, Microsoft CEO Satya Nadella said that 20%-30% of all of the code in their code repositories was written by AI. But the AI coding agents are substantially better today than they were then. More recently, a spokesperson at Anthropic said that now 70%-90% of all of their code is AI-written.


Having heard impressive things about Claude Code (one of the most popular AI coding agents), I wanted to try it myself. My first ever experiment with it was to see if I could (in less than twelve hours of work on my end, and without writing a single line of code), make a fully functioning web-based game. To my great surprise, it worked. I decided what the game would be and made the decisions regarding how the game works, but Claude Code wrote all the code (over many back and forths). You can try the game here if you're curious: The Stranger's Games (it works best on computers rather than phones).


Since then, I've been using Claude Code to do some other coding projects that I never previously could make time for, and it has enabled me to complete them in less than a tenth of the time it would have taken to code them myself. The productivity boost from using it for small coding projects is shocking.


I'm not saying that AI can currently do everything software engineers can do - it can't. But software engineering as a profession is never going to be the same again.


Despite all of this advancement we’ve seen from AI in different domains, it’s still, of course, the case that even the top-of-the-line AIs today make mistakes. AI work often needs checking. But, on the other hand, they make far fewer mistakes than they did 18 months ago. And if you think AI making mistakes renders it useless, then you haven’t experienced what it is capable of.


I also want to be clear that I'm not claiming there couldn’t be a crash in the market due to excessive AI investment - if AI fundraising gets too far ahead of company revenues for too long, a market crash related to AI is possible, even if the technology is extremely powerful.



Some advice for the world we’re in now


The reality is, even if we were to freeze AI technology at its current state, it would have substantial impacts on business, work, and our day-to-day lives. In such a case, it might take years for all of these changes to percolate and for the impacts to reach an equilibrium in society. 


But we are not in that situation - this is a technology in rapid motion. Difficult as it is to swallow, this is the worst this technology will EVER be. 


The changes I described represent what’s happened within roughly the past 18 months. So what is the next 18 months going to look like? 


Nobody knows, but barring extreme catastrophe, AI will be more powerful in every one of the domains that I mentioned compared to what it is right now, and it will also continue advancing in domains I didn’t touch on. AI may one day hit a wall in its improvement, but that hasn’t happened yet.


So, here’s my advice.


Suggestion 1 - Use: If you are not ethically opposed to using AI (a position I can certainly understand), start experimenting with how you can use AI to genuinely improve the quality of your work. However, this can easily go wrong. It's important to steer clear of producing AI slop, of automating aspects of your work that shouldn't be automated, and of sucking the soul or authenticity out of your work.


Consider using Anthropic's Claude as your AI, since (as far as I have been able to judge based on what I’ve seen so far) Anthropic is the most ethical of the AI companies, while still doing an effective job of staying on the bleeding edge. I give this advice because I believe it's better to be riding the tidal wave than to be swept up by it. On the other hand, if you’re ethically opposed to using AI, that’s a valid choice. Either way, I believe the rest of my suggestions still may apply.


Suggestion 2 - Information: Learn about how AI technology works (at least, the basics that help you understand it), and stay up on what's happening in AI, but in a way that is deliberate and not overwhelming. There is so much mere noise about AI every day, and so much over-claiming and hype, that it can be easy to get sucked into time-wasting, misleading rabbit holes. And, at the same time, there's so much negative sentiment and doomerism that it can be easy to fall into anxiety spirals.


There's a limit to how much AI news is useful, and past that point, it's a waste of your time, or worse than a waste of time. If you’re prone to anxiety from hearing about AI, it’s wise to be especially mindful about curating sources. I think a good way to navigate this is to find a few reliable sources of AI information that don't produce excessive output and focus on important aspects, and simply follow those while avoiding other AI news. The useful sources might be newsletters, media sites, podcasts, or YouTube channels. (On this note, I’d like to say that: although we will talk about AI occasionally in our work, it’s not something we want to spend too much time focused on. Our most fundamental aim is still to give you insights into topics in critical thinking, psychology, decision making, and related areas. We plan to talk about AI only occasionally, as it pertains to those aims.)


Consider following a mix of practical sources (addressing questions like "how does it work", "how might it impact the area you work in", ”how can you use AI effectively”, “what's just now becoming possible”) and sources that discuss the societal implications in a nuanced way. I also occasionally put out podcast episodes about AI on the Clearer Thinking podcast, usually focused on societal or philosophical questions that AI raises, such as this one, this one, and this one.


Suggestion 3 - Career: If you can, avoid going into a new career or job that is in the process of major AI disruption. For instance, I don't think now is a good time to become a graphic designer or translator. Spend some time thinking about how your current career or job is likely to be impacted, to start, by the current AI technology. Once you understand what current AI is capable of, ask: 


“What will it look like when the current AI technology is widely adopted in my field?”

“What would it likely accelerate, partially automate, fully automate, or make unnecessary?”

I recommend starting with just the current AI tech because it's already a challenging set of questions, without also having to speculate about future changes in the technology. But once you've attempted to answer that first set of questions, it's worth asking yourself: 


“Suppose that, in the next 18 months, AI makes as much progress as it has in the past 18 months - how might that shift work in my career/at my job further?” 

Suggestion 4 - Personal projects: It may be that those projects (whether creative, fun, or entrepreneurial) that you've always dreamed of doing, but never did due to a lack of time or missing skills, are now far more doable with the use of AI. Consider trying them now (or, if they are still not yet feasible, in 6 months when the technology is even more powerful). I've already done two projects like this that realistically I just wouldn't have done were it not for AI making it so much easier, and I was really pleased to have done them.


That being said, this needs to be done thoughtfully, because using AI can sometimes undermine the meaning or expression involved in a project. If you enjoy painting, then asking an AI to make a "painting" is not going to serve the same purpose or scratch the original itch, and hence could well be pointless. But if you have been really wanting to do some personal project, but you're blocked by one technical hurdle, AI may be able to get you over that hurdle. Or if there's a little app you've always wanted to make for some purpose, you may be able to make it now with "vibe coding" and get it working.


Suggestion 5 - Concerns: If you’re concerned about the impact of AI - whether pragmatic or existential - follow the work of some of the great non-profits aiming to help protect the world from this technology's dangers and downsides. And consider channeling your concern into supporting them - whether with donations of money or time.



I hope this advice is useful to you. If you think I’m being alarmist, or wrong about something here, please feel free to tell me why or what I got wrong, by replying to this email or emailing info@clearerthinking.org. I appreciate opportunities to learn more and update my views.


And finally, in case you’re wondering: this article was written without AI (except for the use of a spelling and grammar error checker).


***


Here are the answers to which of the still life fruit images were AI generated: 

  1. Upper-left: Google AI

  2. Upper-right: Theude Grönland, a Danish/German still-life painter

  3. Bottom-left: Emilie Preyer, a German still-life painter

  4. Bottom-right: OpenAI AI

 
 