
ARCS Pittsburgh Program: AI: Disruptor or Enhancer or Both?

5:15 PM – 7:15 PM ET
4800 Forbes Ave
Hamburg Hall
Room 1204
Pittsburgh, PA
United States

Hosted by CMU on its campus
Date: May 27, 2026
Time: 5:15–7:15 p.m. ET
Timely Topic of Great Importance: “AI: Disruptor or Enhancer or Both?”
Location: Hamburg Hall, Room 1204, 4800 Forbes Ave.
Parking: We encourage carpooling, especially at locations on our partner-university campuses. Three parking options: the Morewood Lot (free after 5 p.m.), the CMU East Campus Lot (three-hour fee), and the Carnegie Art Museum lot (less expensive and about the same distance).
We encourage you to bring a guest!
What’s Special: AI dominates our conversations. Is it for good or evil? We have the great opportunity to hear from Dr. Jeremy Avigad, CMU Professor of Philosophy and Mathematical Sciences and director of the newly established Institute for Computer-Aided Reasoning in Mathematics, a National Science Foundation mathematical sciences research institute. He is at the center of AI’s shift from clever parrot to genuine intellectual tool, one that accelerates work the way the microscope accelerated biology or the telescope expanded astronomy: it allows us to see more than we could unaided.

How Artificial Intelligence Is Changing the Way We Think, Work, and Discover
Two years ago, many mathematicians and scientists viewed AI as a very sophisticated autocomplete machine. It could write fluent sentences, summarize papers, and imitate expertise—but it was not trusted to reason. The common criticism was: “It predicts the next word; it does not actually think.” That skepticism was reasonable. Early systems often made confident mistakes, invented facts, and failed badly on problems that required several logical steps. They looked smart but were often only rearranging patterns they had seen before.

What has changed is not that AI suddenly became “conscious” or human-like, but that it became much better at handling structured thought. Instead of only producing quick answers, newer systems can break problems into parts, test alternatives, check their own work, and revise when something does not fit. In mathematics, science, law, and medicine, that matters far more than elegant wording. A simple analogy is the difference between a student who memorizes many examples and one who can work through a new mathematical proof step by step. Earlier AI was closer to the first student. Today’s stronger systems can often behave more like the second—especially when guided properly.

For mathematicians, the surprise came when AI began helping with genuine problem-solving tasks: suggesting proof strategies, identifying hidden assumptions, checking whether an argument fails in edge cases, or connecting ideas across fields that a person might miss. It is not replacing mathematical creativity, but it is increasingly acting like a strong research assistant.

This does not mean AI is infallible. It still makes errors and must be checked carefully. Mathematicians are right to remain cautious. But the argument has shifted. The question is no longer “Can AI reason at all?” but rather “When does its reasoning become reliable enough to trust?” That is a major change. The best way to think about AI now is not as an oracle that knows truth, but as a powerful collaborator: fast, tireless, and surprisingly capable of following chains of logic—provided a skilled human remains the judge. In that sense, AI has moved from being seen as a clever parrot to being recognized as a genuine intellectual tool. And for a field as demanding as mathematics, that is a remarkable shift.
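The “skilled human remains the judge” idea is exactly what the field of computer-aided reasoning formalizes: a proof assistant accepts an argument only when every single step checks out. As an illustrative sketch (not an example from the talk itself), here is what machine-checked reasoning looks like in Lean, a proof assistant widely used in this area:

```lean
-- Each step below is verified mechanically; a gap or a wrong
-- inference is rejected by the system rather than glossed over.

-- A one-step proof, citing a known lemma by name:
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b

-- Even a trivial fact must be justified explicitly:
example (n : Nat) : n + 0 = n := by
  rw [Nat.add_zero]
```

AI suggestions that arrive in this form can be checked automatically, which is one concrete sense in which the “when is its reasoning reliable enough to trust?” question becomes answerable.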