Learning, enjoying, and assessing AI

In early February, I participated in the Ashby Workshop on the social impacts of artificial intelligence (AI), held at the Salamander Resort in Middleburg, Virginia. The workshop was organized by Fathom, a philanthropy-supported nonprofit, and brought together about 150 participants under strict no-attribution and no-recording rules. The group was deliberately multidisciplinary, including AI researchers, scientists, ethicists, economists, and policymakers. The goal was to foster open discussion about AI risks, coordination failures, and governance challenges. The workshop is named after W. Ross Ashby (1903–1972), a pioneer of cybernetics who formulated what is known as Ashby’s Law: a system can only be effectively controlled if the controller has at least as much variety as the system itself. In practical terms, this means that simple, static rules are insufficient for governing complex, adaptive systems such as AI. This insight motivates the search for more flexible and adaptive approaches to AI governance—institutions and policies that evolve through continuous learning, monitoring, and reassessment as the technology itself changes.
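To make Ashby's Law concrete, here is a toy simulation (my own illustration, not part of the workshop materials): the "system" produces disturbances of several types, and a "controller" can neutralize a disturbance only if it has a matching response in its repertoire. All names and numbers are hypothetical.

```python
import random

def fraction_controlled(n_disturbances, n_responses, trials=10_000, seed=0):
    """Toy illustration of Ashby's Law of requisite variety.

    Each disturbance type i can only be neutralized by the matching
    response i. A controller limited to the first `n_responses` response
    types cannot regulate disturbances it has no answer for.
    """
    rng = random.Random(seed)
    controlled = 0
    for _ in range(trials):
        disturbance = rng.randrange(n_disturbances)  # what the system throws at us
        if disturbance < n_responses:                # controller has a matching response
            controlled += 1
    return controlled / trials

# A controller with less variety than the system leaves many outcomes unregulated.
print(fraction_controlled(n_disturbances=10, n_responses=4))   # ~0.4
print(fraction_controlled(n_disturbances=10, n_responses=10))  # 1.0
```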

Salamander is an upscale countryside retreat in a secluded setting, with refined accommodations. The warm and cozy interior stood in sharp contrast to the snowy surroundings, creating an inviting setting for both discussion and relaxation. The workshop was dominated by young, rising professionals; my participation noticeably raised the average age. Artificial intelligence consists of computer systems that use sophisticated statistical techniques to learn from data, recognize patterns, and make decisions or answer questions that traditionally require human intelligence. Modern AI systems are loosely modeled on neural networks in the brain, enabling them to update their responses as new information becomes available. Unlike humans, however, AI can process vast amounts of information extremely quickly and at scale, giving it clear advantages in speed, memory, and breadth of exposure.
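As a minimal sketch of what "learning from data" means in practice, the snippet below trains a single perceptron, the 1950s ancestor of today's neural networks (see Table 1), to reproduce a simple pattern, updating its weights whenever a new example proves it wrong. It is an illustrative toy, not a description of how modern frontier models are built.

```python
# Minimal sketch of statistical learning: a single perceptron (Rosenblatt, 1950s)
# updating its weights each time a labeled example shows it was wrong.
def train_perceptron(examples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - prediction
            # Update the weights only when the prediction is wrong.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learn the logical OR pattern from four labeled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print(w, b)  # weights and bias that separate the two classes
```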

This post is based on ideas and information I obtained at the Ashby Workshop and on conversations with participants, colleagues, and friends. I relied on ChatGPT for editing and contextual support. The post covers multiple aspects of AI development and policy, including reliability and security; education and learning; productivity, adoption, and labor; politics and the international arena; AI and sustainability; and ethics.

AI can generate seemingly novel ideas, texts, or solutions by recombining patterns from existing data, but this form of creativity is fundamentally derivative. Human creativity, by contrast, often emerges from lived experience, emotion, uncertainty, and the willingness to struggle with problems that initially resist solution. In short, AI excels at rapid, pattern-based innovation, while human creativity is grounded in meaning, purpose, and judgment. The greatest potential lies not in AI replacing human creativity, but in humans using AI as a tool that expands exploration while retaining control over direction, values, and interpretation. I have begun using AI to edit letters, review the literature, and even assist in developing mathematical models. In my view, AI is a highly capable research assistant—but, like any assistant (including myself), it makes mistakes. Its outputs therefore require careful review and judgment before being accepted or signed off on.

 

AI is the outcome of ongoing research in mathematics, computer science, and other disciplines. Initially this research was mostly basic, but over time it has become more applied, producing a large array of innovations and products. Some of the major contributors to this research are listed in Table 1.

Table 1. The Founders of AI

| Name | Primary Contribution | Context / Period |
| --- | --- | --- |
| Alan Turing | Formalized computation and machine reasoning | Early–mid 20th century |
| John von Neumann | Computer architecture; links between computation and the brain | Mid 20th century |
| Norbert Wiener | Cybernetics, feedback, and control systems | Mid 20th century |
| Claude Shannon | Information theory and communication under uncertainty | Mid 20th century |
| John McCarthy | Coined the term "Artificial Intelligence"; founded the field | 1950s |
| Marvin Minsky | Symbolic AI and cognitive models | 1950s–1970s |
| Herbert Simon | Problem solving, bounded rationality, AI programs | 1950s–1970s |
| Allen Newell | Logic-based AI and problem-solving programs | 1950s–1970s |
| Frank Rosenblatt | Perceptron and early neural networks | 1950s–1960s |
| Geoffrey Hinton | Deep learning; backpropagation; neural networks | 1990s–2010s |
| Yann LeCun | Convolutional neural networks; applied deep learning | 1990s–2010s |
| Yoshua Bengio | Representation learning; deep learning theory | 2000s–2010s |

The development of AI relies on innovation and product supply chains. Table 2 presents some of the major stages in these supply chains, their activities, and the main players. It aims to give an overview of the structure of an industry that will continue to evolve over time.

Table 2. The Major Players in AI Supply Chains

| Supply Chain Stage | Role | Organization | Primary Focus / Contribution |
| --- | --- | --- | --- |
| Knowledge creation & basic research (upstream) | Fundamental algorithms, theory, and talent training | MIT | AI, robotics, systems engineering |
| | | Stanford University | Machine learning, foundation models, AI policy |
| | | Carnegie Mellon | Robotics, autonomy, human–AI interaction |
| | | Oxford University | Learning theory, ethics, applied AI |
| | | DARPA / NSF | Public funding, agenda-setting for AI research |
| Applied research & frontier model development (midstream) | Translate theory into large models and systems | OpenAI | Frontier language and multimodal models |
| | | DeepMind | Reinforcement learning, scientific AI |
| | | Anthropic | Aligned and safety-focused frontier models |
| | | Meta AI | Open models, vision, and language research |
| | | Google Research | Algorithms, infrastructure, AI systems |
| Compute, chips, and infrastructure (enabling) | Scalable compute, hardware, and cloud infrastructure | NVIDIA | GPUs and AI accelerators |
| | | AMD | CPUs and AI accelerators |
| | | TSMC | Advanced semiconductor fabrication |
| | | Amazon Web Services | Cloud compute and data centers |
| | | Microsoft Azure | Cloud compute and AI services |
| | | Google Cloud | Cloud infrastructure and AI platforms |
| Data, tools, and platforms (bridging) | Enable training, deployment, and monitoring | Scale AI | Training data and evaluation |
| | | Hugging Face | Model sharing and deployment tools |
| | | Databricks | Data engineering and ML platforms |
| | | Snowflake | Data infrastructure for AI workloads |
| Product development & sectoral deployment (downstream) | Embed AI into real-world applications | UiPath | Enterprise automation |
| | | Palantir Technologies | Government and industrial decision systems |
| | | Tesla | Autonomous driving and robotics |
| | | SenseTime | Computer vision at scale (China) |
| | | OXSIGHT | Assistive vision technologies |
| Governance, standards, and oversight (cross-cutting) | Regulation, standards, and social governance | EU Commission | AI regulation and rights-based governance |
| | | NIST* | AI risk management and safety frameworks |
| | | OECD | International AI principles and coordination |
| | | CDT** | Human rights, accountability, and digital policy |

* National Institute of Standards and Technology

** Center for Democracy & Technology

Reliability and Security

AI is reshaping how information is created, shared, and trusted. It has made it much easier to produce content that appears authentic even when it is false. As a result, traditional anchors of credibility—news organizations, expert voices, and human intuition—are under increasing strain. The central challenge is no longer only misinformation, but growing uncertainty about what can be trusted at all.

AI-generated content is often difficult to recognize as fake, forcing people to spend more time and effort deciding what to believe. Over time, this can lead to fatigue, cynicism, and doubt, weakening shared understanding and social trust. At the same time, digital platforms frequently reward attention and engagement rather than accuracy. Surprising or dramatic content tends to spread faster than careful, well-verified information. AI further amplifies this dynamic by allowing bad actors to produce and test large volumes of content quickly, enabling misleading claims to travel faster and farther than corrections.

These dynamics have serious real-world consequences. AI-driven misinformation can influence elections, distort public health decisions, disrupt financial markets, and affect international relations. Governments and institutions often struggle to respond quickly enough, creating risks for democratic processes, economic stability, and public safety.

Addressing these challenges requires moving beyond simple content removal toward improving information transparency. Clear signals about the source of information, how it was created, and whether it has been altered can help people make better judgments. Several technologies demonstrated at the conference showed how difficult it has become to identify false data or fabricated figures, while also offering practical tools that make certain forms of fakery easier to detect.

AI systems themselves should be designed to support critical evaluation rather than passive acceptance. This includes providing access to sources, communicating uncertainty, and encouraging users to question and verify outputs. Public policy also has a role to play by setting expectations for transparency and accountability, while education and journalism remain essential for strengthening people’s ability to assess information.

False information cannot be eliminated entirely. The goal is instead to create an environment in which reliable, well-sourced information is more likely to stand out, earn trust, and guide decision-making. 

 

Education and Learning

The discussion highlighted that AI is likely to reshape learning in powerful but uncertain ways, and that it may widen existing educational gaps if not used carefully. AI is already part of young people’s daily lives. Students encounter it when scrolling through social media, asking questions of chatbots, or using tools that summarize readings or solve problems. These technologies shape not only what students see, but how they study, communicate, and think about their own abilities.

Many AI-driven platforms are built to capture attention. Content that is emotionally intense or extreme is often promoted because it keeps users engaged. For some young people, this translates into higher stress, anxiety, and constant comparison with others. I have seen students arrive in the classroom already distracted and exhausted, carrying pressures that originate online but affect their ability to learn and focus. AI also blurs the boundary between people and machines. Some students treat chatbots as trusted advisors, even in sensitive areas, without fully understanding their limits. When AI substitutes for human guidance rather than supporting it, students can lose opportunities for mentorship, discussion, and emotional support.

Data and privacy raise additional concerns. AI systems quietly build profiles based on students’ behavior—what they search for, how they respond, how quickly they complete tasks. These profiles can influence what opportunities or recommendations they receive later, often without their awareness. For young people, these long-term consequences are difficult to anticipate or contest.

AI is now entering schools and universities, changing how students learn in very concrete ways. Some students use AI productively: to compare explanations from different sources, explore alternative solution methods, or check their reasoning after struggling with a problem. Used this way, AI can deepen understanding and broaden skills. Other students, however, rely on AI to generate answers quickly, skipping the difficult process of thinking through a problem. In my own education, especially in mathematics and economics, repeated failure and frustration were essential. I learned far more from working through mistakes than from seeing the correct answer immediately.

From a policy perspective, AI presents both opportunities and risks for young people. Youth safety must be a priority in the design and use of AI, with age-appropriate boundaries and basic standards for transparency, accountability, and protection. Regulation must be combined with ongoing oversight, collaboration among educators, companies, and families, and continuous monitoring as AI evolves. Yet AI should not be treated only as a hazard. While it can create risks, it can also enhance learning, creativity, and access to knowledge. The policy challenge is to allow young people to benefit from new tools while maintaining guardrails that reduce harm and build resilience in a world where AI is becoming ubiquitous.

AI is already entering schools and universities. Used well, it can personalize learning and support teachers. Used poorly, it can substitute for thinking, weakening students’ ability to persist through difficulty and learn from failure. Effective education policy should therefore emphasize integration rather than substitution. AI should support learning without eliminating productive struggle, which is essential for developing judgment, resilience, and independent problem-solving. This requires changes in teaching methods from early childhood through university, shifting the focus from memorization to critical thinking, interpretation, and ethical use of AI.

Productivity, Adoption, and Labor

While AI adoption has already been significant across many sectors, we are still far from observing its full impacts. AI is generating measurable productivity gains across many fields. In the short run, these gains mostly reflect faster task completion and reductions in routine work. In some areas, however, AI delivers more durable benefits by augmenting skilled human judgment and accelerating learning and discovery rather than simply automating tasks—especially in research and scientific innovation.

Presenters cited studies showing 20–50 percent reductions in time spent on routine activities such as drafting documents, summarizing information, and writing code. AI often performs software development tasks 20–30 percent faster and accelerates customer support by 15–35 percent. These rates of improvement are unlikely to continue indefinitely. By contrast, the potential gains from AI in research appear much larger. Traditional research processes are slow, costly, and uncertain. In biomedicine, developing a new drug typically takes 10–15 years, costs $1–2 billion, and has failure rates exceeding 90 percent. AI tools that accelerate literature review, hypothesis generation, and experimental design can substantially shorten these cycles. In biology, AI-based protein structure prediction has reduced tasks that once took months or years to hours or days. In materials science and chemistry, AI-guided experimentation has reduced the number of physical experiments required by 30–70 percent, lowering both costs and time to discovery.

These research productivity gains compound over time. Faster discovery expands the stock of knowledge, improves data and models, and accelerates future innovation. Even modest increases—on the order of 5–10 percent per year in effective research productivity—can generate large cumulative effects over decades, making AI a potentially powerful engine of long-run economic growth.
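A rough back-of-the-envelope calculation, using the 5–10 percent figures cited above, shows why such compounding matters:

```python
# Compounding of a 5-10% annual gain in effective research productivity
# sustained for 30 years (illustrative arithmetic only).
for annual_gain in (0.05, 0.10):
    cumulative = (1 + annual_gain) ** 30
    print(f"{annual_gain:.0%} per year -> {cumulative:.1f}x after 30 years")
# 5% per year -> 4.3x after 30 years
# 10% per year -> 17.4x after 30 years
```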

Gains in research are also likely to spill over into healthcare and the life sciences. AI-assisted diagnostics have improved accuracy by 5–15 percent in areas such as medical imaging, while reducing clinician time per case by 10–30 percent. Administrative automation can save physicians 1–2 hours per day, raising productivity by 10–20 percent. At the same time, widespread adoption depends on demonstrable reductions in errors to maintain trust.

In manufacturing, energy, and agriculture, AI generates steady gains by reducing waste and improving resource use. Precision agriculture can cut fertilizer and water use by 10–25 percent without reducing yields, while predictive maintenance can lower equipment downtime by 20–50 percent. Because these gains improve efficiency rather than replace judgment, they tend to persist.

In business, law, and finance, AI delivers rapid efficiency gains. Document review, contract analysis, and report generation can be completed 50–90 percent faster, primarily affecting support roles. While these tools save time and effort, automation risks reducing early-career opportunities and weakening human capital formation unless training models evolve.

Overall, the most sustainable productivity gains arise in fields that generate new knowledge, improve decision-making under uncertainty, and reduce waste. The long-run value of AI will depend less on how many tasks it automates and more on how effectively it expands human capacity to learn, innovate, and solve complex problems.

AI adoption remains early but is accelerating, and it is uneven across sectors and regions. Adoption is deep in information technology, software, finance, insurance, professional services, and media; moderate in manufacturing, healthcare, retail, and logistics; and still emerging in education, agriculture, and the public sector. Advanced economies and large firms are adopting faster, raising concerns about inequality. Europe is slower on average than the United States and China, while in many developing countries adoption remains limited to a few areas, often linked to global trade.

The technology adoption literature suggests that these impacts should be interpreted with caution. Adoption typically follows a gradual and uneven diffusion process shaped by profitability, learning costs, complementary investments, and institutions. Early adopters capture initial gains that often reflect “low-hanging fruit” such as time savings. As emphasized in the learning-by-doing literature, productivity improves with cumulative use as experience lowers costs, enhances performance, and expands networks that enable knowledge spillovers. As a result, current evidence reflects a transitional phase: adoption is significant but incomplete, and productivity effects are still unfolding.
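The diffusion and learning-by-doing story can be sketched with a stylized model (my own illustration with hypothetical parameters, not an estimate from the workshop): adoption follows an S-shaped curve, while unit costs fall as a power of cumulative experience, in the spirit of Wright's learning curve.

```python
import math

def logistic_adoption(t, ceiling=1.0, rate=0.6, midpoint=8.0):
    """Stylized S-shaped diffusion: slow start, rapid middle, saturation."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def unit_cost(cumulative_use, initial_cost=100.0, learning_elasticity=0.3):
    """Wright-style learning curve: costs fall as a power of cumulative experience."""
    return initial_cost * cumulative_use ** (-learning_elasticity)

cumulative = 1.0  # start with one unit of accumulated experience
for year in range(0, 16, 3):
    share = logistic_adoption(year)
    cumulative += share  # crude proxy for experience accumulating with use
    print(f"year {year:2d}: adoption {share:5.1%}, unit cost {unit_cost(cumulative):6.1f}")
```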

Public policy can shape adoption by reducing uncertainty, investing in skills and public goods, lowering regulatory barriers, and expanding access to complementary resources. When social benefits exceed private returns, or when adoption creates negative side effects, targeted incentives and safeguards may be needed to guide both the pace and direction of development.

Finally, AI will not affect all workers equally, and its impact extends beyond office tools to robotics and physical systems in manufacturing, logistics, and services. Routine white- and blue-collar jobs face the greatest disruption. Younger workers may lose traditional entry points for learning, while older workers may struggle with retraining. At the same time, AI can create new opportunities by augmenting workers—for example, technicians supervising automated systems or nurses using AI-enabled diagnostics. Policy responses should emphasize rapid reskilling, employer-provided training, and support for job transitions to ensure that productivity gains translate into broadly shared opportunity rather than widening inequality.

Politics and the International Arena

Representatives from both the Biden and Trump administrations attended the conference, and I was struck by both the collegiality and the policy continuity between the two administrations. Both see AI as a domain in which the United States must achieve global leadership—though Biden emphasizes governance, safety, and accountability, while Trump focuses on competitiveness and limiting regulatory constraints. Biden’s policy emphasized supporting AI power infrastructure and clean energy as part of federal AI infrastructure planning, while Trump’s policy supports AI infrastructure broadly (data centers, build-out, grid connections), but its AI-power emphasis has been on deregulation, permitting, and enabling private-sector-led growth.

The development and use of AI vary significantly across U.S. states and are highly concentrated geographically, most notably in California, which benefits from dense research infrastructure, deep human capital, venture finance, and sustained public support. At the same time, AI expansion is increasingly constrained by energy availability, electricity prices, water use for cooling, and permitting for data centers and grid upgrades—factors that differ sharply across states and shape where AI can scale. States also vary widely in their regulatory approaches: some emphasize strong protections focused on transparency, fairness, labor impacts, and public safety (notably California and Illinois), while others prioritize innovation-led growth with lighter or more targeted regulation (such as Utah and New Jersey). These differences reflect local political cultures, industrial structures, and infrastructure capacity, but they also create a fragmented national landscape and intensify tensions between state autonomy and federal efforts to coordinate AI governance, energy planning, and safety standards.

By comparison, the European Union has adopted a more centralized and precautionary model, emphasizing harmonized regulation and fundamental rights, while China pursues a state-coordinated approach that tightly integrates AI development with industrial policy, energy systems, and data control. The U.S. lies between these models, relying on decentralized experimentation and market-driven innovation, yet facing growing pressure for federal coordination to address infrastructure, environmental impacts, and global competitiveness.

The discussion positioned the U.S.–China AI rivalry less as a narrow technology race and more as a competition between industrial systems. AI leadership depends on the performance of an integrated ecosystem—advanced semiconductors, energy and compute infrastructure, data, talent, and downstream markets—where scale, coordination, and speed matter. Export controls on advanced chips may slow China at the margin, but they cannot substitute for building durable capacity at home and with allies, and overreliance on restrictions risks diverting attention from the U.S.’s own structural constraints.

From a dynamic comparative advantage perspective, China’s strength lies in workforce scale, manufacturing depth, and rapid deployment, while the U.S. advantage rests on frontier research, capital markets, innovation networks, and alliance-based supply chains. Current policy choices, however, risk undermining these advantages. China is often moving faster in deployment, in part due to lower regulatory burdens, while the U.S. and Europe face delays from fragmented and precaution-heavy regulatory approaches. While safeguards are essential, excessive risk aversion can impose high opportunity costs by slowing learning-by-doing, delaying diffusion, and allowing competitors to move down cost curves more quickly.

At the same time, U.S. AI capacity is inseparable from its allies. Taiwan’s central role in advanced chip manufacturing highlights both the strength and fragility of this interdependence, while Europe’s attempt to pair regulation with industrial investment underscores the need for coordination rather than fragmentation. Yet rising tariffs, trade uncertainty, and restrictive immigration policies weaken the very networks through which the U.S. exercises its comparative advantage—raising costs, discouraging investment, and constraining the high-skill talent base critical to AI research and deployment.

In a dynamic setting, leadership in AI requires systems that combine innovation with rapid diffusion, learning-by-doing, and scale through cooperation. Policies that strain alliances, restrict trade, or limit skilled migration risk locking the U.S. into a slower adjustment path, eroding its relative advantage over time. Sustained AI leadership therefore requires not only domestic investment, but a strategic commitment to openness, allied integration, and regulatory frameworks that enable speed, learning, and resilience rather than fragmentation and delay.

Another dimension of AI competition is its growing use in military applications. AI is already deployed in intelligence analysis, surveillance, targeting support, and logistics, with increasing pressure to extend its role to autonomous weapons. Delegating life-and-death decisions to algorithms entails serious risks, including model error, bias, adversarial manipulation, rapid escalation driven by machine-speed decisions, and weakened human accountability.

Unchecked military AI deployment may create an arms-race dynamic in which speed and automation are prioritized over reliability and control. Failures of military AI can have irreversible consequences; these risks are fundamentally different from those in civilian applications. Thus, international agreements may be needed to govern autonomous AI-controlled weapons. Analogous to arms-control regimes, such agreements could require meaningful human control over lethal force and establish shared norms to reduce escalation risks. While enforcement would be difficult, the absence of agreed constraints may prove more destabilizing in an increasingly competitive AI environment.

AI and Sustainability

 The discussion emphasized that AI can play a meaningful role in climate mitigation and adaptation by improving forecasting, managing complex energy and environmental systems, and supporting better decision-making under uncertainty. AI can also play a major role in developing the global bioeconomy, where AI-enhanced biotechnologies enable the use of living organisms to produce chemicals, fuels, and pharmaceuticals, sequester carbon, and replace nonrenewable—particularly fossil—resources. More broadly, AI tools can help target investments, reduce waste, and strengthen resilience to extreme weather, though their effectiveness depends on data quality, institutional capacity, and incentives that support real-world adoption.

By increasing agricultural and food productivity while reducing environmental impacts—through greater precision, improved input-use efficiency, recycling and reuse, and better supply-chain coordination—AI can help protect biodiversity, safeguard oceans, and contribute to global sustainable development. A key challenge for deploying advanced technologies such as AI and biotechnology in the bioeconomy lies in regulatory constraints and resistance from various stakeholders. Addressing this requires better communication with policymakers and the public about the benefits of these technologies, alongside serious engagement with legitimate concerns.

The gains from AI are strongest when it supports farmers’ decisions rather than replacing them, and when it is paired with extension, training, and policies that allow innovations to diffuse beyond large producers. Across all areas, a shared theme was that AI delivers its greatest benefits when it augments human judgment and accelerates learning rather than simply automating tasks. The discussion concluded that realizing AI’s potential for climate, food, and health requires thoughtful governance, sustained investment in skills and data, and close coordination between the public and private sectors to ensure that benefits are widely shared.

The rapid expansion of AI capacity has a dual and context-dependent effect on greenhouse gas (GHG) emissions. AI development increases electricity demand through energy-intensive data centers and specialized hardware, potentially raising emissions when powered by fossil-fuel-dominated grids. However, AI is also a general-purpose enabling technology that can significantly reduce emissions when applied to energy systems and climate governance. The net climate impact of AI therefore depends critically on deployment choices, electricity sources, and institutional design.

AI can lower emissions by improving the efficiency and flexibility of electricity systems. Machine-learning tools enhance forecasting of renewable generation, optimize grid dispatch, reduce curtailment, and shift demand toward periods with lower marginal emissions. These capabilities are essential for integrating high shares of intermittent renewables while maintaining reliability and controlling costs. AI further strengthens climate policy by improving emissions measurement and accountability. Advances in remote sensing and data analytics enable near-real-time monitoring of emissions across power, industry, and transport, reducing reliance on self-reported inventories.
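A stylized example of the demand-shifting idea (with hypothetical hourly numbers, not actual grid data): a flexible load, such as a batch computing job, is scheduled into the hours with the lowest marginal emissions rather than run at a fixed time of day.

```python
# Emissions-aware load shifting: run a 4-hour flexible job in the hours
# with the lowest marginal emissions. Hourly values are hypothetical
# marginal emissions rates (kg CO2 per MWh) for a single day.
marginal_emissions = [620, 580, 410, 390, 350, 330, 400, 480,
                      560, 610, 650, 700, 690, 640, 600, 570,
                      530, 500, 470, 440, 430, 450, 520, 590]

hours_needed = 4
load_mwh_per_hour = 2.0

def total_emissions(hours):
    return sum(marginal_emissions[h] * load_mwh_per_hour for h in hours)

cheapest_hours = sorted(range(24), key=lambda h: marginal_emissions[h])[:hours_needed]
naive_hours = list(range(9, 9 + hours_needed))  # mid-morning run, ignoring the grid

print("shifted:", sorted(cheapest_hours), total_emissions(cheapest_hours), "kg CO2")
print("naive  :", naive_hours, total_emissions(naive_hours), "kg CO2")
```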

The work of our department graduate, Gavin McCormick, illustrates how AI can align electricity consumption with cleaner generation and make emissions visible and actionable through data-driven monitoring. More broadly, AI’s climate benefits are maximized when paired with low-carbon power and policies that prioritize energy optimization and transparent emissions tracking. Under these conditions, AI can accelerate decarbonization rather than reinforce carbon-intensive growth.

Ethics

 Advances in physics and information science have enabled the development of AI systems whose behavior is shaped by training data and algorithms, and which can increasingly operate autonomously. Once trained and supplied with energy, these systems can determine their own actions, adapt to new information, and perform tasks with limited human intervention. AI already takes many forms—digital, visual, auditory, and physical—and its outputs range from data and images to actions carried out by robots in the physical world. As AI becomes embedded in machines that act continuously, autonomy is no longer defined only by intelligence or algorithms, but also by access to energy. Today, that energy is almost entirely electricity, provided through power grids, dedicated energy sources, and on-site generation and storage, linking the evolution of AI directly to energy infrastructure and environmental constraints.

Looking forward, AI systems may become increasingly independent, potentially managing their own energy use and setting their own directions and actions. Developments that once appeared confined to science fiction may gradually become feasible. This prospect lies at the center of contemporary debates about AI governance and ethics. In many domains, AI systems already outperform humans, and their growing capabilities may lead some to attribute to them exceptional authority or even quasi-divine status. Such narratives are not merely cultural curiosities; they influence public trust, shape regulatory responses, and affect how responsibility and accountability are assigned. Current debates therefore emphasize the need for governance frameworks that address not only algorithmic safety and bias, but also energy use, environmental impacts, and the preservation of human agency. Managing the relationship between humans and increasingly autonomous AI systems is challenging. Whether AI evolves to become our god, our children, our competitor, or our friend depends on ethical norms, and shaping those norms requires coordinated efforts in technology design, energy systems, education, governance, and policy.

Major religious traditions and modern political institutions such as democracy can be viewed as humanistic frameworks in the sense that they place human dignity, moral worth, and limits on power at the center of social order. Although grounded in different sources of authority—transcendent moral claims in religion and popular sovereignty and law in democracy—both traditions directly or indirectly support the ideas of human rights, liberty, accountability, and the protection of the vulnerable. These commitments have been realized imperfectly and through historical struggle, but they have generated enduring safeguards against the concentration of unchecked power. In contemporary debates on AI governance, this humanistic legacy underpins humanitarian ethics that insist AI systems remain tools subject to human oversight, moral responsibility, and institutional control, rather than autonomous authorities. 

A humanistic approach to AI requires both establishing principles for the development of the technology and creating mechanisms that help humanity adapt to these new capabilities. That involves education, as well as therapeutic and psychological approaches that address the negative side effects of using the technology. It also challenges international relations: the global community can use AI to enhance overall human development and share the fruits of knowledge, or it can use it as a mechanism for concentrating power and improving competitiveness. These are reasons to continue researching and assessing the technology as it evolves, enhancing public education and awareness, and developing policies and institutions that lead toward a path of humanistic AI development.

Lessons from the Ashby Workshop

The Ashby Workshop reinforced that artificial intelligence is neither a simple tool nor an uncontrollable force, but a powerful, evolving system whose impacts depend critically on how it is designed, governed, and adopted. Across domains—education, research, labor markets, international competition, security, and sustainability—a consistent lesson emerged: AI delivers its greatest and most durable value when it augments human judgment, accelerates learning, and supports better decision-making, rather than when it merely automates tasks or replaces human agency.

The discussions highlighted both promise and risk. AI can transform productivity, accelerate scientific discovery, strengthen climate resilience, advance the bioeconomy, and improve health outcomes. At the same time, it raises serious challenges related to trust, misinformation, inequality, workforce disruption, geopolitical competition, and the delegation of life-and-death decisions in military settings. These challenges cannot be addressed through static rules or narrow restrictions alone, but require adaptive governance, institutional learning, and international coordination—precisely the insight captured by Ashby’s Law.

Equally important was the value of the meeting itself. The opportunity to learn from diverse perspectives, engage openly across disciplines and generations, and build new professional relationships underscored the importance of forums like this. Such gatherings are essential not only for understanding AI, but also for shaping its trajectory toward outcomes that are socially beneficial, resilient, and consistent with widely shared ethical principles.

(Figure source: Gavin McCormick, Climate TRACE)

2 thoughts on “Learning, enjoying, and assessing AI”

  1. Thanks for the interesting perspective. AI is like driving a car in a highly populated city that does not follow any laws or traffic rules, or even consider the humans in the system. Now we see more accidents than its original purpose of transportation. The less we know about AI, the more challenging it is to define rules and regulations.

  2. Ruslana Rachel Palatnik

    Thank you, David! This post offers a really thoughtful overview of the pros and cons of AI across a wide range of domains. Some of the issues are already well known and widely debated, but others are more subtle and less visible. The clear challenges are developing critical thinking in students who are learning with AI and preserving human agency. The takeaway is that AI cannot be governed effectively with static rules — it requires adaptive, evolving regulation that can keep pace with the technology itself.
