Learn about ClearerThinking's inner workings and the power of internet-based research on Rationally Speaking

Updated: Sep 29, 2021


About a month ago, ClearerThinking founder Spencer Greenberg appeared on the podcast Rationally Speaking. Hosted by Center for Applied Rationality cofounder Julia Galef, Rationally Speaking features free-ranging discussions on science, technology, and human reasoning. Spencer and Julia's conversation focused on how we do what we do here at ClearerThinking, with a particular emphasis on how we use web-based tools to perform social science research more quickly, cheaply, and flexibly than conventional methods allow.

If you've ever wondered about the philosophy and principles that guide ClearerThinking's work — or if you're interested in the nitty-gritty of our research, such as our approach to the p-hacking problem — this conversation will make for an interesting and entertaining listen. You can hear the whole thing here, but we've summarized a few interesting updates about future programs and conceptual tidbits about our research process from the podcast below:

Our sister company, UpLift, is building an automated app for people with depression. UpLift was built around principles from cognitive behavioral therapy (CBT), one of the best-researched and most effective therapeutic techniques for mood disorders like clinical depression, and it aims to deliver those benefits at low cost to anyone with an internet connection. The app is still in development, but we've seen some exciting results while testing early versions of it. On average, the first 80 test users who completed the whole program experienced a 50% reduction in depression symptoms (!) in only 34 days.

We're also close to launching the Decision Advisor, an automated tool designed to make challenging, complex life decisions easier to work through. You may remember occasional mentions of our research on this subject in our newsletters over the past several months. Keep an eye out for further announcements about this one!

Spencer and Julia spent some time in the podcast discussing our use of Amazon's Mechanical Turk service for research purposes. (Mechanical Turk is a crowdsourcing workforce market that calls itself "artificial artificial intelligence"; it allows you to remote-hire large numbers of people to perform short tasks that are currently too complex for machines, such as participating in scientific studies like the ones we design and run when we're developing new tools.) They focused in particular on the demographic problem posed by using Mechanical Turk as a source population for experiments. Specifically, Mechanical Turk users tend to differ from the general population in terms of age, financial situation, and other factors. As a result, Mechanical Turk users don't constitute a totally representative sample of the society at large.
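For the technically curious, here's a minimal sketch of what posting a study task (a "HIT") to Mechanical Turk can look like in code. It uses Amazon's boto3 Python library; the survey URL, reward, and worker counts are hypothetical placeholders rather than parameters from any actual ClearerThinking study.

```python
# Minimal sketch: posting a survey task ("HIT") to Mechanical Turk via boto3.
# The survey URL, reward, and counts below are illustrative placeholders.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # The sandbox endpoint lets you test a task without paying real workers.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion points workers to a survey hosted elsewhere.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/our-study-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Short decision-making survey (about 10 minutes)",
    Description="Answer questions about everyday choices.",
    Keywords="survey, research, psychology",
    Reward="1.50",                     # payment per completed assignment, in USD
    MaxAssignments=200,                # number of distinct workers wanted
    LifetimeInSeconds=3 * 24 * 3600,   # how long the task stays listed
    AssignmentDurationInSeconds=3600,  # time each worker has to finish
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```

Once the HIT fills up or expires, the requester reviews and pays out the submitted assignments; recruiting a few hundred participants this way can take hours rather than the weeks a traditional subject pool might require.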

ClearerThinking's approach to this problem is relatively simple: we try to test our materials on populations that are demographically similar to the audience that uses our tools. (That's you guys!) That's why we beta test our tools with real ClearerThinking users. Conveniently, Mechanical Turk users also skew in a similar direction to our user base: they tend to be younger and more tech-savvy, for instance. Because our goal is to help our actual users, a sample that resembles those users matters more to us than one that perfectly represents society at large. This reflects an important difference between ClearerThinking's approach and conventional academic research: we're working to develop impact-oriented interventions that help people deal with specific problems, rather than testing hypotheses about humans in general. For instance, where we design, build, test, study, and then release techniques and tools to help people form new positive habits, academics would traditionally focus on the hypothesis-generation and hypothesis-testing parts of that same process.
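To make the demographic comparison concrete, here's a toy sketch (not our actual pipeline) of one way to check whether a study sample's age distribution resembles a user base's, using a chi-square test from scipy. All the counts are invented for illustration.

```python
# Toy sketch: does a study sample's age distribution resemble the user base's?
# All counts below are invented for illustration.
from scipy.stats import chi2_contingency

age_brackets = ["18-24", "25-34", "35-44", "45-54", "55+"]
mturk_sample = [120, 310, 180, 70, 20]  # hypothetical study participants
our_users = [100, 290, 200, 85, 25]     # hypothetical tool users

# Chi-square test of homogeneity: do both groups share one age distribution?
stat, p_value, dof, expected = chi2_contingency([mturk_sample, our_users])
print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No strong evidence the distributions differ.")
else:
    print("Distributions likely differ; consider reweighting or re-recruiting.")
```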

Julia and Spencer also discussed another important feature of ClearerThinking's process: our ability to run as many studies as needed, in relatively short timespans, to fully understand the implications of our findings. This sets us apart from conventional academic research in an important way: we face far fewer limitations on how closely and iteratively we can examine the issues we study. In conventional social science, publication deadlines and budgetary concerns often constrain researchers. That's not the case for us: our mission is to make tools that help people, so we don't face the "publish or perish" pressures of academia. In one memorable case, we ran 12 follow-up studies on a single finding to make sure we got it right.
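To give a flavor of why repeated follow-ups firm up a finding, here's a toy illustration (not a description of our actual analysis) that combines p-values from several hypothetical, independent follow-up studies using Fisher's method via scipy.

```python
# Toy sketch: combining evidence from independent follow-up studies.
# Each p-value below is invented; in practice each would come from a
# separately run replication testing the same hypothesis.
from scipy.stats import combine_pvalues

followup_pvalues = [0.04, 0.11, 0.03, 0.07, 0.02, 0.09]

# Fisher's method aggregates independent tests of one hypothesis: several
# individually modest results can add up to strong combined evidence.
stat, combined_p = combine_pvalues(followup_pvalues, method="fisher")
print(f"Fisher statistic = {stat:.2f}, combined p = {combined_p:.5f}")
```

The same logic runs in reverse, too: if follow-ups keep failing to reproduce an effect, the combined evidence weakens, which is exactly the kind of signal iterated studies are meant to surface.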

The conversation also touched on one of our most cherished research tactics: asking people to explain the reasoning behind their answers in our studies. These explanations have completely changed our interpretation of many bizarre and confusing results we've encountered in our research. For instance, a study we thought was measuring individuals' susceptibility to the Sunk Cost Fallacy turned out to be measuring something completely different: people's discomfort at not finishing their food when they're eating with another person. If we hadn't asked participants to explain their answers, we would never have figured out that we were measuring the wrong thing.
