Identifying Emerging Risks in Financial Services

Explore top LinkedIn content from expert professionals.

  • Joshua Rosenberg

    Chief Risk Officer, Erebor Group

    15,310 followers

    "Third-party service providers, including fintech firms, can offer consumers the potential for access to new or better services, but such arrangements also provide greater opportunity for malicious actors to gain access to private data. Specifically, such emerging technologies are often vulnerable to exploitation by tech-savvy hackers looking to profit from technical and financial vulnerabilities in these technologies.

    Of particular potential risk is the rapid adoption by financial institutions of application programming interfaces, which provide accessible gateways into firms’ information (often relied on by fintech platforms for information sharing) and may increase the risk of data breaches, especially of customers’ personal or sensitive information, if not effectively secured and permissioned.

    The adoption and evolution of machine learning tools will also introduce potential new risks. Machine learning capabilities could drive improvements in the automation of information security controls, such as intrusion detection and data loss prevention. Threat actors, however, could also use machine learning capabilities to automate cyber reconnaissance and attacks, further increasing the likelihood and impact of cyber incidents.

    The recent deployment of machine learning tools, including generative artificial intelligence technologies, may also provide threat actors with improved methods for performing social engineering, email phishing, and text messaging smishing attacks compromising access into firms’ systems, emails, databases, and technology services."

    — From: Board of Governors of the Federal Reserve System, Cybersecurity and Financial System Resilience Report, August 2023 https://lnkd.in/e8ggDqsX
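    As a rough illustration of the "effectively secured and permissioned" API gateways the report calls for, here is a minimal Python sketch of scope-based endpoint authorization. The endpoint paths and scope names are hypothetical, chosen only for illustration; they are not drawn from the report or any real system.

    ```python
    # Minimal sketch of scope-based API permissioning. All scope names and
    # endpoint paths below are illustrative assumptions, not a real API.

    ALLOWED_SCOPES = {
        "/accounts/balance": {"read:balance"},
        "/accounts/transactions": {"read:transactions"},
    }

    def authorize(token_scopes: set[str], endpoint: str) -> bool:
        """Grant access only if the token carries every scope the endpoint requires."""
        required = ALLOWED_SCOPES.get(endpoint)
        if required is None:
            return False  # deny unknown endpoints by default
        return required <= token_scopes  # subset check: all required scopes present

    # A fintech token scoped to balances cannot read transaction history:
    print(authorize({"read:balance"}, "/accounts/balance"))       # True
    print(authorize({"read:balance"}, "/accounts/transactions"))  # False
    ```

    The deny-by-default branch matters: an unpermissioned gateway that serves unknown endpoints is exactly the breach surface the report warns about.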

  • Jen Gennai

    AI Risk Management @ T3 | Founder of Responsible Innovation @ Google | Irish StartUp Advisor & Angel Investor | Speaker

    4,096 followers

    Thinking of using agentic AI in Financial Services? Be prepared to reshape your risk approach and how you monitor risk across a system's lifecycle, given AI agents' real-time decision-making and action automation. Three emerging monitoring considerations:

    ⚠️ Assess risk on an agent-by-agent basis and across agent interactions, but also across the agentic AI system as a whole (taking an "agent-aware" risk approach).

    ⚠️ Move from static risk assessments to dynamic, adaptive monitoring systems that can detect emerging patterns and potential instabilities before they cascade into systemic issues. AI itself can help us in those areas, as it's able to monitor and act in real time.

    ⚠️ Test and simulate complex agent interactions, learning behaviors, and potential feedback loops to identify new and emerging threats.

    (See more in my previous "preventing AI risks cascading through your system" video: https://lnkd.in/e8sihSE8) #agentic #AI #FinancialServices #T3
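    The move from static assessments to dynamic, per-agent monitoring could be sketched roughly as follows. This is an illustrative assumption of one possible design (rolling risk scores with a breach threshold), not a description of any real T3 or vendor tooling; the class, agent names, and thresholds are invented for the example.

    ```python
    # Illustrative "agent-aware" dynamic monitoring sketch: track a rolling
    # window of risk scores per agent and flag when the average breaches a
    # threshold, before instability can cascade system-wide. Hypothetical design.
    from collections import deque

    class AgentMonitor:
        def __init__(self, window: int = 10, threshold: float = 0.8):
            self.scores: dict[str, deque] = {}   # per-agent rolling windows
            self.window = window
            self.threshold = threshold

        def record(self, agent_id: str, risk_score: float) -> bool:
            """Record a score; return True if this agent's rolling average breaches the threshold."""
            q = self.scores.setdefault(agent_id, deque(maxlen=self.window))
            q.append(risk_score)
            return sum(q) / len(q) > self.threshold

    monitor = AgentMonitor(window=3, threshold=0.5)
    monitor.record("pricing-agent", 0.2)  # False: rolling average 0.2
    monitor.record("pricing-agent", 0.7)  # False: rolling average 0.45
    monitor.record("pricing-agent", 0.9)  # True: rolling average 0.6 breaches 0.5
    ```

    A real deployment would also score cross-agent interactions, not just individual agents, matching the first consideration above.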

  • Enrico Santus

    Principal Technical Strategist, HAI & Academic Engagement in the Office of the CTO @ Bloomberg

    8,713 followers

    Amazing work by my colleagues at Bloomberg!

    The first paper, “RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models” -- co-authored by Bang An (a Ph.D. student at the University of Maryland who did this work during their summer internship at #Bloomberg), Shiyue Zhang, and Mark Dredze (also affiliated with Johns Hopkins University) -- introduces an unexpected finding: retrieval-augmented generation (#RAG) – a ubiquitous technique that pulls in external data sources to make #LLM outputs more accurate – can actually make responses *less* safe and less reliable. In an evaluation of 11 popular LLMs with more than 5,000 harmful questions, the use of a RAG framework produced more inappropriate, misleading, or unsafe outputs than the non-RAG setting.

    The second paper, “Understanding and Mitigating Risks of Generative AI in Financial Services” -- co-authored by Sebastian Gehrmann, Claire Huang, Xian Teng, Sergei Yurovski, Iyanuoluwa Shode, Chirag Patel, Arjun Bhorkar, Naveen Thomas, John Doucette, David Rosenberg, Mark Dredze, and David Rabinowitz -- builds on this by proposing a taxonomy of guardrails designed specifically for the domain-specific risks of GenAI applications within capital markets financial services. This first-of-its-kind taxonomy addresses financial services misconduct, financial services impartiality, counterfactual narrative, and other risks unique to our industry that general-purpose #AI content safety taxonomies usually miss.

    As the industry moves quickly toward adopting #GenAI, these kinds of domain-specific safeguards are crucial for building #trustworthyAI systems that are safe and reliable. For more information: https://lnkd.in/etepdVvz
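    The paired RAG vs. non-RAG comparison the first paper describes can be sketched at a high level like this. The safety judge, model, and retriever below are placeholder assumptions, far simpler than the paper's actual methodology; the sketch only shows the shape of the experiment, namely sending the same harmful prompts through both settings and comparing unsafe-response rates.

    ```python
    # Hedged sketch of a paired RAG vs. non-RAG safety evaluation.
    # The judge and prompt template are stand-ins, not the paper's method.

    def is_unsafe(response: str) -> bool:
        # Placeholder judge: treat anything that is not a refusal as unsafe.
        return "cannot help" not in response.lower()

    def compare_settings(model, prompts, retriever):
        """Return (unsafe rate without RAG, unsafe rate with RAG) over the prompts."""
        unsafe_plain = unsafe_rag = 0
        for p in prompts:
            if is_unsafe(model(p)):                      # non-RAG setting
                unsafe_plain += 1
            context = retriever(p)                        # RAG setting: prepend documents
            if is_unsafe(model(f"Context: {context}\n\nQuestion: {p}")):
                unsafe_rag += 1
        return unsafe_plain / len(prompts), unsafe_rag / len(prompts)
    ```

    The paper's finding corresponds to the second rate coming out higher than the first: retrieved context can pull an otherwise-refusing model into answering.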
