Implications of AGI Development for Businesses

Explore top LinkedIn content from expert professionals.

The biggest AI impacts won't be borne out in a calculus of jobs but rather in seismic shifts in the level of expertise required to do them. In our article in Harvard Business Review, Joseph Fuller, Michael Fenlon, and I explore how AI will bend learning curves and change job requirements as a result.

It's a simple concept with profound implications. In some jobs, it doesn't take long to get up to speed. But in a wide array of jobs, from sales to software engineering, significant gaps exist between what a newbie and an experienced incumbent know. In many jobs with steep learning curves, our analysis indicates that entry-level skills are more exposed to GenAI automation than those of higher-level roles. In these roles, representing 1 in 8 jobs, entry-level opportunity could evaporate. Conversely, about 19% of workers are in fields where GenAI is likely to take on tasks that demand technical knowledge today, thereby opening up more opportunities to those without hard skills. Our analysis suggests that, in the next few years, the better part of 50 million jobs will be affected one way or the other.

The extent of those changes will compel companies to reshape their organizational structures and rethink their talent-management strategies in profound ways. The implications will be far reaching, not only for industries but also for individuals and society. Firms that respond adroitly will be best positioned to harness GenAI's productivity-boosting potential while mitigating the risk posed by talent shortages.

I hope you will take the time to explore this latest collaboration between the Burning Glass Institute and the Harvard Business School Project on Managing the Future of Work. I am grateful to BGI colleagues Benjamin Francis, Erik Leiden, Nik Dawson, Harin Contractor, Gad Levanon, and Gwynn Guilford for their work on this project. https://lnkd.in/ekattaQA

#ai #artificialintelligence #humanresources #careers #management #futureofwork
New on AI Snake Oil: Arvind Narayanan and I argue that AGI will not lead to rapid economic effects, the race to AGI is not relevant for great power competition, we won't know AGI when we have built it, and AGI does not imply impending superintelligence. In other words, AGI is not a milestone: https://lnkd.in/exDQbafU

1) Even if general-purpose AI systems reach some agreed-upon capability threshold, we will need many complementary innovations that allow AI to diffuse across industries to realize its productive impact. Diffusion occurs at human (and societal) timescales, not at the speed of tech development.

2) Worries about AGI and catastrophic risk often conflate capabilities with power. Once we distinguish between the two, we can reject the idea of a critical point in AI development at which it becomes infeasible for humanity to remain in control.

3) The proliferation of AGI definitions is a symptom, not the disease. AGI is significant because of its presumed impacts, but it must be defined based on properties of the AI system itself. The link between system properties and impacts is tenuous, and it depends greatly on how we design the environment in which AI systems operate. Whether a given AI system will go on to have transformative impacts is therefore still undetermined at the moment the system is released, so a determination that an AI system constitutes AGI can only meaningfully be made retrospectively.

4) Businesses and policy makers should take a long-term view. Businesses should not rush to adopt half-baked AI products: rapid progress in AI methods and capabilities does not automatically translate to better products. Building products on top of inherently stochastic models is challenging, and businesses should adopt AI products cautiously, conducting careful experiments to determine the impact of using AI to automate key business processes (see the sketch after this post).

A "Manhattan Project for AGI" is misguided on many levels. Since AGI is not a milestone, there is no way to know when the goal has been reached or how much more needs to be invested. And accelerating AI capabilities does nothing to address the real bottlenecks to realizing its economic benefits.

We plan to keep writing on this topic, and have a series of essays planned on the theme of AI as Normal Technology. Follow the AI Snake Oil substack for more.
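Point 4 above is actionable, so here is a minimal sketch of the kind of experiment it recommends: randomly split incoming cases between the AI-automated pipeline and the existing process, then compare a quality metric across the two arms before committing to automation. Everything below (the handler functions, the quality scores, the Python/SciPy tooling) is an illustrative assumption, not something from the post.

```python
import random
from scipy.stats import ttest_ind

def route(cases, ai_handler, baseline_handler, ai_fraction=0.5, seed=0):
    """Randomly assign each case to the AI arm or the control arm and
    record a per-case quality score (higher = better)."""
    rng = random.Random(seed)
    ai_scores, control_scores = [], []
    for case in cases:
        if rng.random() < ai_fraction:
            ai_scores.append(ai_handler(case))
        else:
            control_scores.append(baseline_handler(case))
    return ai_scores, control_scores

# Toy stand-ins for real handlers; replace with calls into your actual systems.
ai_handler = lambda case: random.gauss(0.78, 0.10)        # hypothetical quality score
baseline_handler = lambda case: random.gauss(0.75, 0.10)

ai, control = route(range(1000), ai_handler, baseline_handler)
t_stat, p_value = ttest_ind(ai, control, equal_var=False)  # Welch's t-test
print(f"AI mean={sum(ai)/len(ai):.3f}  "
      f"control mean={sum(control)/len(control):.3f}  p={p_value:.3f}")
```

The point of the holdout arm is that a raw before/after comparison can't separate the AI's effect from everything else changing in the business; only a significant and practically meaningful gap between the two arms justifies wider rollout.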
To all executives looking to build AI systems responsibly: Yoshua Bengio and a team of 100+ AI advisory experts from more than 30 countries recently published the International AI Safety Report 2025, consisting of ~300 pages of insights. Below is a TL;DR (with the help of AI) of the content you should pay attention to, including risks and mitigation strategies, as you continuously deploy new AI-powered experiences for your customers.

🔸AI Capabilities Are Advancing Rapidly:
• AI is improving at an unprecedented pace, especially in programming, scientific reasoning, and automation
• AI agents that can act autonomously with little human oversight are in development
• Expect continuous breakthroughs, but also new risks as AI becomes more powerful

🔸Key Risks for Businesses and Society:
• Malicious Use: AI is being used for deepfake scams, cybersecurity attacks, and disinformation campaigns
• Bias & Unreliability: AI models still hallucinate, reinforce biases, and make incorrect recommendations, which could damage trust and credibility
• Systemic Risks: AI will most likely impact labor markets while creating new job categories, but it will also increase privacy violations and escalate environmental concerns
• Loss of Control: Some experts worry that AI systems may become difficult to control, though opinions differ on how soon this could happen

🔸Risk Management & Mitigation Strategies:
• Regulatory Uncertainty: AI laws and policies are not yet standardized, making compliance challenging
• Transparency Issues: Many companies keep AI details secret, making it hard to assess risks
• Defensive AI Measures: Companies must implement robust monitoring, safety protocols, and legal safeguards (see the sketch after this post)
• AI Literacy Matters: Executives should ensure that teams understand AI risks and governance best practices

🔸Business Implications:
• AI Deployment Requires Caution. Companies must weigh efficiency gains against potential legal, ethical, and reputational risks
• AI Policy Is Evolving. Companies must stay ahead of regulatory changes to avoid compliance headaches
• Invest in AI Safety. Companies leading in ethical AI use will have a competitive advantage
• AI Can Enhance Security. AI can also help detect fraud, prevent cyber threats, and improve decision-making when used responsibly

🔸The Bottom Line
• AI's potential is massive, but poor implementation can lead to serious risks
• Companies must proactively manage AI risks, monitor developments, and engage in AI governance discussions
• AI will not "just happen." Human decisions will shape its impact.

Download the report below, and share your thoughts on the future of AI safety! Thanks to all the researchers around the world who created this report and took the time to not only surface the risks but also provide actionable recommendations on how to address them. #genai #technology #artificialintelligence
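The "Defensive AI Measures" bullet is easy to state and hard to operationalize. Below is a minimal sketch of one concrete interpretation: wrap every model call in logging and a simple output screen, so failures are visible, auditable, and escalated to a human rather than silently shipped. The call_model function, the banned-pattern list, and the retry policy are all hypothetical placeholders, not recommendations taken from the report.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrail")

# Illustrative block-list only; a real deployment would use proper
# PII/safety classifiers rather than substring matching.
BANNED_PATTERNS = ["ssn:", "password:"]

def call_model(prompt: str) -> str:
    """Placeholder for a real model API call."""
    return f"stub response to: {prompt}"

def guarded_call(prompt: str, max_retries: int = 2):
    """Call the model, log latency and output size, screen the output,
    and escalate to human review if every attempt fails the checks."""
    for attempt in range(max_retries + 1):
        start = time.monotonic()
        output = call_model(prompt)
        latency = time.monotonic() - start
        log.info("attempt=%d latency=%.3fs chars=%d", attempt, latency, len(output))
        if any(p in output.lower() for p in BANNED_PATTERNS):
            log.warning("blocked output matching a banned pattern; retrying")
            continue
        return output
    log.error("all attempts failed checks; escalating to human review")
    return None

print(guarded_call("Summarize this support ticket for the customer."))
```

The structured log lines are the monitoring half of the bullet (they can be aggregated to track failure rates over time), while the screen-retry-escalate path is the safety-protocol half: the system degrades to human review instead of emitting unchecked output.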