Earning the skeptic's approval
Setting standards for AI development at Snowpack Data
Kevin Koenitzer | March 6, 2026

In our conversations with clients and internally, we've noticed a pattern: The more skilled a practitioner is in their craft, the more skeptical they are of AI-generated outputs in their domain of expertise.
AI has made it easier than ever for laypeople to sketch out complex ideas quickly, build visually impressive mock-ups of tools and draft solutions to various problems. Particularly in the realm of software development, it has removed boundaries previously imposed by technical expertise and lowered the barrier to entry for non-developers with ideas of their own.
AI is a skill you develop, not a tool you use.
There is a difference between generating an output that looks impressive and one that is actually useful, and that difference lies in the skill, judgment, and expertise of the pilot. The barrier in development has not gone away, but rather has shifted away from capability and toward expertise.
In other words: It's not about knowing how to build something, it's about knowing what to build. It's the difference between technical implementation and architectural decision-making.
Practical expertise comes from internalizing thousands of micro-judgments about quality through years of practice. It is the fingerprints of that practical expertise that low-quality (read: low-context) AI solutions lack.
Low-quality AI implementations occur when no one wrestles with a problem thoroughly before generating a solution, and domain experts notice it immediately.
The developers of low-quality solutions are resistant to "what" questions:
- "What is this?"
- "What does it do?"
- "What is the functional/technical/ideological framework that underpins the solution?"
It's no coincidence that "what" questions are of the type we would normally rely on people we call "experts" to answer.
The bullsh*t test
At Snowpack Data, our north star is quality. We've spent a lot of time thinking about what it means to provide quality service, deliver quality work, and to be quality humans.
As we've explored implementing AI solutions, both for clients and internally, we've worked to balance the apparent benefits of AI tooling with our desire to preserve the intense attention to quality that defines our brand.
Through that exploration, we've noticed that the fingerprints of quality work are often immediately apparent to skilled practitioners in ways that less experienced practitioners won't catch, which brings us to the following point:
Expert skepticism is the most reliable signal we have for evaluating whether a given application of AI is adding tangible value to an organization.
A mechanism for valuing AI applications: The 'Skeptic's Approval' method
The central question for any AI application is whether human judgment and expertise is genuinely present in the output. In our experience, outputs that lack those qualities do not survive expert scrutiny.
At Snowpack, any AI-assisted output in a consequential context requires thorough review by a qualified domain expert, with the expectation that they evaluate the level of quality present in the work along the axes of judgment and expertise, among other criteria.
The benefit of this mechanism is that it shifts behavior at scale: Over time, it allows for calibration of standards across the organization as people learn what passes and what doesn't.
This standard requires no enumerated policy, travels across every context, and puts accountability on the person using the tool, not the tool itself.
In simple terms, whenever we build something new or implement a process using AI we ask ourselves whether we would be embarrassed or proud to show the output to the person at the company with the greatest expertise in the relevant domain.
Barring some Dunning-Kruger tendencies in larger groups that can confound the results, we've found this method works quite well in practice.
Within our own firm, we as individuals are explicitly responsible for the outputs we generate, regardless of the methods we use to produce them. Taking accountability for results is far more important in an environment where "the AI did that" is a common refrain.
It's important to us to make sure that the statement "the AI did that" is treated with caution, and that its increasingly common occurrence doesn't allow it to be re-characterized as an acceptable answer to a question of quality.
The cost of getting there slowly
We employ this method not only to preserve quality, but also to preserve capital and avoid wasting time implementing solutions in areas where we lack the expertise to judge quality and value.
Every organization is going to arrive at the same destination eventually. AI will be used where it genuinely adds value, and it will be abandoned where it doesn't. What is in question is how much it costs to get there.
The organic path, driven by top-down pressure to adopt and with no mechanism for distinguishing useful applications from wasteful ones -- though quintessentially American -- is expensive.
In the race to adopt this new technology, the companies with the least understanding of AI and the most pressure to implement it will accumulate the most waste.
That waste will not show up in the products built when they are deployed, but rather it will show up later on the balance sheet, as salary, as time, and as direct AI costs that have accumulated without generating corresponding returns.
As with all revolutionary technologies, the stumbling is going to happen regardless. The question at the individual company level is how long it lasts and what it costs.
The skeptic's approval method: Final thoughts
If you want proof of the massive waste being generated by the progress of this technology, you can look pretty much anywhere. From the water and energy consumption of data centers, to the number of AI sales and marketing companies in the Bay Area, to your company's exponentially growing OpenAI or Anthropic usage and bill (and your CFO's blood pressure reading), the waste is all around us. But the most damning piece of evidence we see in support of the skeptic's approval method is this:
While entry-level jobs requiring rote work in tech are disappearing, the demand for domain expertise has never been higher. A brand-new data engineer with no experience struggles to find an entry-level position today, yet many of the companies we talk to are struggling to source and hire the right data engineer. This is a signal from the market that, for now, the pilot is just as important as the tool, if not more so.
As excited as we are about the myriad possibilities AI has opened up for us and our firm, we are more focused on maintaining our commitment to quality, and on its capacity as an engine for growth and innovation, than on adopting the latest technology. At the same time, we have to balance those priorities against a market environment that is throwing every possible resource into AI, and recognize that we can't afford to miss the opportunities that proliferation of resources makes available to us.
These two competing forces are why we rely on the Skeptic's Approval method: it allows us to continue learning and moving forward at the pace we need to, while ensuring we don't get sidetracked along the way.
And now that I've written this with my own two index fingers (just kidding, I used Wispr Flow, obviously), it's time for me to plug it into Claude Code and have it spit out a blog article in HTML format -- the way Dario Amodei, and God, intended.