📣 FiscalNote (NYSE: NOTE) announces the launch of Bill Comparison in PolicyNote, a powerful new capability that enables users to instantly compare legislation, track changes across versions, and identify key differences that shape policy outcomes. “Policy professionals are constantly pressed to understand how legislation is changing, and what it means, in real time,” said Josh Resnik, CEO & President of FiscalNote. “PolicyNote’s new Bill Comparison feature gives them immediate, intuitive visibility into what’s been added, removed, or changed, saving hours of manual review and accelerating decision-making. Our teams continue to deliver high-impact, AI-powered capabilities that help our customers make strategic decisions more quickly, giving them a critical edge in an increasingly complex policy environment.” Read the full release. https://fnlink.co/3Jg8KBE #PolicyNote #FiscalNotable #GovernmentRelations #AI
More Relevant Posts
📌 As promised, Part 1 — What LLM-based AI is actually good at (and not) in finance

Let's start with a simple truth: LLMs are great with text. Numbers? Not so much. But it's not for the reason most people think.

I often hear: "LLMs are probabilistic token generators, so they can't do math or run deterministic tasks." That's partly true, but increasingly a solved problem. Today's multi-agent frameworks can separate planning from execution, route arithmetic to deterministic tools, and run independent verifiers (a toy sketch of this pattern follows at the end of this post). The real question isn't whether this problem can be solved, but how efficiently.

But even once you fix LLM variability, numbers remain harder to crack than text. Why? Context.

With text, semantic meaning is usually embedded locally or explicitly cross-referenced. A contract clause is explained by surrounding sentences. A legal term references a clear definition. Enrich the text chunk with nearby context, throw it into an LLM, and you'll generally be happy.

Numbers are different. The context that makes a financial number meaningful is often nowhere near the number itself. It might be:
- Scattered across different parts of the document
- Implied by industry convention rather than stated
- Hidden in choices the preparer made about what to include or exclude

LLMs can't "see" this context unless you explicitly surface it. A human analyst has built up pattern recognition about where to look for these landmines. AI doesn't have that instinct yet.

Let me show you with an example. Suppose you're running a benchmarking task with AI: "Compare S&M spend of three public software firms in Q2'25." Most generic AIs will:
- Spot the S&M row and the Q2'25 column
- Pull the values for each company
- Divide by revenue and present a neat comparison table

On the surface? Job well done. But any analyst will tell you: this is where the real questions begin. Just because all three numbers are labeled "Q2'25 S&M" doesn't mean they're comparable. And here's the scary part: the AI output looks just as confident whether it caught these issues or not. Clean tables hide messy reality.

Before I reveal my answers, a quick poll – which hidden context would worry you MOST when comparing S&M across companies?
A) Fiscal year misalignment (one company's Q2 ≠ another's Q2)
B) Recent M&A distorting the numbers
C) Different accounting policies (SBC treatment, commission capitalization, etc.)
D) Something else (tell me in the comments)

(I'll post Part 2 shortly, where I break down my framework for thinking through the main categories of hidden context in financial numbers, how to surface them for AI, and where today's tools still struggle.)
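To make "route arithmetic to deterministic tools, and run independent verifiers" concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption (the Metric fields, the figures, the helper names); it is not any particular framework. The point is that the division happens in plain code, a verifier recomputes it, and an explicit fiscal-period check surfaces one piece of hidden context before any neat table gets presented.

```python
# Illustrative sketch: deterministic arithmetic plus an independent check,
# instead of letting an LLM "do the math". All names and values are made up.
from dataclasses import dataclass

@dataclass
class Metric:
    company: str
    period_end: str     # fiscal quarter end date, e.g. "2025-06-30"
    sm_expense: float   # S&M spend, USD millions
    revenue: float      # revenue, USD millions

def sm_ratio(m: Metric) -> float:
    """Deterministic arithmetic step -- no LLM involved."""
    return m.sm_expense / m.revenue

def verify(m: Metric, ratio: float, tol: float = 1e-9) -> bool:
    """Independent verifier recomputes the value and checks it."""
    return abs(ratio - m.sm_expense / m.revenue) < tol

metrics = [
    Metric("A", "2025-06-30", 120.0, 400.0),
    Metric("B", "2025-06-30", 95.0, 310.0),
    Metric("C", "2025-07-31", 150.0, 520.0),  # off-cycle fiscal year
]

# Surface hidden context explicitly: flag fiscal-period misalignment
# rather than silently producing a clean comparison table.
periods = {m.period_end for m in metrics}
if len(periods) > 1:
    print(f"WARNING: period ends differ {sorted(periods)}; ratios may not be comparable.")

for m in metrics:
    r = sm_ratio(m)
    assert verify(m, r)
    print(f"{m.company} ({m.period_end}): S&M = {r:.1%} of revenue")
```

In a real multi-agent setup the verifier would be a separate agent or tool call rather than a one-line recomputation, but the principle is the same: arithmetic and comparability checks live in deterministic code, and the LLM's job is to surface the context that feeds them.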
We were asked for comments for a Sifted article on the new UK Government blueprint for AI regulation, released on Tuesday 21st October. It's clearly very early days and there are lots of details to iron out, but the stand-out question for me is how we make sure this isn't just window dressing.

There's a lot of focus on cross-economic sandboxes and regulation, which sounds great, but they need to be funded and staffed properly. We see this happen with regulators all the time when they're facing off against very well-funded industries: either they don't have meaningful tools to hold those organisations to account, or they have their best talent poached by the companies they're regulating. It's not all about money, of course; there's legislation and leadership too, but the implementation will be everything.

I'm also concerned that we end up in a situation where the framing is "AI is the answer, what's the problem?" There are some indications that this is heading in the right direction for some things, but there's a real risk this becomes a playground for the big consultancies and the vendors. There are so many meaningful problems that can be solved using all sorts of methods, but if we insist they have to be solved with AI, using the tools the vendors are pushing, we're going to exclude all sorts of projects that could deliver meaningful progress.

Underpinning it all, we need to get away from this "how can we use AI" framing and move towards "what are the problems we need to solve"; then we can pick the tools. We see it all the time: the process, the law, or the implementation is the problem, not the technology. Yes, there are significant areas where technology can help us in our daily lives, but if we set policy by soundbite, we'll end up with a portfolio of vanity projects and very little meaningful progress.

Link in the comments - it's behind a hefty paywall, unfortunately.
“AI workslop” is increasingly appearing in news reports. The phrase is based on “AI slop,” a term coined in 2024 by tech journalist Casey Newton to describe low-quality, AI-generated content. Now, in 2025, we have a new term, “AI workslop,” which a recent Forbes article defined as “AI-generated content that masquerades as finished work but fails to meaningfully advance any actual task” (https://lnkd.in/g-q2uq53).

A Stanford study recently found that over 40% of US-based full-time employees reported receiving “workslop” in the last month. Roughly half of the people surveyed viewed colleagues who sent workslop as less creative, capable, and reliable than before; 42% saw them as less trustworthy, and 37% saw that colleague as less intelligent (https://lnkd.in/gT8iEBWj).

And as consulting firms face rising questions about how AI will affect their business models, they are learning they must balance the rush to use AI against the dangers of AI workslop. One firm recently issued a partial refund to the Australian federal government after delivering an AI-generated report littered with errors (https://lnkd.in/gyUxF3zW).

Organizations and individuals have to learn how to use AI wisely, but AI hype continues to encourage dangerous AI habits. Your clients and your coworkers can see through your AI workslop, and it damages your brand and professional reputation. AI is a tool--it can help summarize information, improve your outputs, and correct your grammar--but using it to produce the work you share with peers and clients is dangerous in several ways:
- First (and most obviously), it can be wrong. We've had too many years of experience with AI errors and hallucinations to pretend they don't exist.
- Second, the things you share with peers have to be relevant and unique to your context. Sharing generic advice gleaned from LLMs only shows what you do *not* know.
- Last, your company has a style, and you have a personal brand. Letting AI tools write your output can damage both by making you sound repetitive and unoriginal. (Imagine asking AI to rewrite William Shakespeare, Jane Austen, or Stephen King--the output would change their voice and alter the meaning and value of the text.)

You and your company get no points for using AI wrong. The future doesn't belong to the people who know how to write the best prompts or use the most AI tools. It belongs to the brightest people who can combine their experience and knowledge with the benefits AI can bring.
If you are an expert in your field, you are probably discovering that AI can shortcut a few things, but it is NEVER a substitute for your expert contributions and thorough review of the final product.
“The future doesn't belong to the people who know how to write the best prompts or use the most AI tools. It belongs to the brightest people who can combine their experience and knowledge with the benefits AI can bring.”
I recently wrote a piece for IFA Magazine sharing my perspective on how AI is transforming financial advice. AI isn’t about replacing advisers; it’s about empowering them. When used well, AI can free advisers from repetitive admin tasks, boost efficiency, and allow more time for what matters most: building strong client relationships. Those who adapt will thrive; those who don’t risk being left behind. Read the full article here 👉 https://hubs.li/Q03M4Hpg0 #AI #FinancialAdvice #AIinFinancialAdvice
Europe has a rulebook for AI. The question is whether we will apply it with enough conviction to earn trust and competitiveness at the same time.

A new essay at Tech Policy Press argues that technological sovereignty depends on demand for reliable European AI, plus consistent enforcement of the rules that protect citizens and deployers. I agree, and I would add one practical point from the field: most risk and most value appear where models meet platforms, workflows, and users. That is where governance should start.

For teams that want to move from principles to decisions, here is what works in practice. Begin with a focused EU AI Act readiness check that names an accountable owner, maps documentation, and makes the option of non-deployment explicit when benefits do not clear the threshold. Add a system map of model-platform interactions to find amplification loops before they bite. Test domain-specific reliability and hallucination rates, not in the abstract but against the actual exposure that could damage users or reputation.

This is how I work with clients today, usually in two to three weeks, using a compact scorecard for hybrid systems and a hallucination risk index tied to real reach (a toy sketch of one way to construct such an index follows below). The aim is a decision you can defend, a mitigation plan you can execute, and a narrative your stakeholders understand.

If you lead AI adoption and want a one-page checklist or a sample scorecard, send me a note.

#EUAIAct #AIGovernance #DigitalSovereignty #RiskManagement #ApplyAI

Article link: https://lnkd.in/eXa7mgXG
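A minimal sketch of how such a "hallucination risk index tied to real reach" could look. The fields, weights, and sample numbers below are illustrative assumptions, not the actual scorecard: the point is only that reach and severity, not the raw error rate alone, drive the index.

```python
# Hypothetical sketch of a hallucination risk index tied to real reach.
# All fields, weights, and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    hallucination_rate: float  # measured on domain-specific test prompts (0..1)
    monthly_reach: int         # people exposed to the system's output per month
    severity: float            # assumed harm if an error reaches a user, 1 (low) to 5 (high)

def risk_index(u: UseCase) -> float:
    """Expected harmful exposures per month: error rate x reach x severity."""
    return u.hallucination_rate * u.monthly_reach * u.severity

cases = [
    UseCase("internal drafting aid", hallucination_rate=0.08, monthly_reach=200, severity=1.0),
    UseCase("customer-facing chatbot", hallucination_rate=0.03, monthly_reach=50_000, severity=4.0),
]
for c in sorted(cases, key=risk_index, reverse=True):
    print(f"{c.name}: risk index = {risk_index(c):,.0f}")
```

Note how the customer-facing chatbot dominates despite the lower error rate: reach and severity drive the exposure. That is the sense in which testing "against the actual exposure" differs from testing in the abstract, and it makes the non-deployment option easy to argue when an index clears a pre-agreed threshold.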
You Already Know Reviews Matter. Here’s What’s Changing.

Most established firms understand that strong Google reviews build trust and drive leads. But what’s shifting fast is how those reviews are being read and interpreted. Large language models (LLMs) now scan reviews across the web, not just Google but also Avvo, FindLaw, and Justia, to evaluate credibility, tone, and experience. The words your clients use are becoming the data that shapes how AI describes your firm to potential clients.

• 88% of consumers read Google reviews before engaging a local business.
• 77% of legal consumers say reviews directly influence their choice of law firm.
• Google’s AI Overviews appear in roughly 18% of searches and pull heavily from review content.

This means directory reviews are gaining new weight in how AI summarizes and ranks firms. But Google reviews remain the foundation, because they are the most visible and most frequently indexed signals across both traditional and AI search.

For firms already investing in reviews, the next frontier is reach and quality. Maintain your Google cadence, but expand your reputation footprint across the major legal directories. The AI systems shaping client decisions are reading everything, and every review now contributes to your authority online.
This week has been heads-down on some critical Insight+ releases around the importance of context, so I scratched down some thoughts these have provoked. #searchgeek #IAbeforeAI

1. Why context matters in domain specialist search

Ambiguity of terms: In law, tax and professional services, the same word or phrase can mean very different things depending on the matter, jurisdiction, or client. Context helps resolve this ambiguity.

Relevance to task: A lawyer searching for "termination" could mean employment contract termination, termination of parental rights, or termination of a lease. The system needs surrounding context (client, jurisdiction, matter type) to deliver the right results.

Efficiency: Professionals don’t want a pile of documents; they want the right evidence, document, precedent, or section in seconds. Context-rich search narrows the field.

2. Combining full text and structured data

Full-text documents and emails carry substance: contracts, pleadings, memos, email negotiations. They hold nuance, argument, and reasoning. Structured business data (client, matter number, jurisdiction, case type) provides metadata and relationships. When you combine them, you can answer richer questions like:
· “Show me all contracts where indemnity was negotiated for this client in the energy sector.”
· “Find emails related to discovery disputes in this matter and link them to the relevant court filings.”
Search is no longer just about keyword matching; it’s about situating results in the legal, business, and client context. (A toy sketch of this combination appears at the end of this post.)

3. Beyond classic retrieval

Classic retrieval: Find documents containing keywords (still a critical use case).
Discovery: Broader search to explore topics or patterns.
With context:
· Systems can rank and filter relevance not only by keyword frequency, but by fit to the client/matter/jurisdiction at hand.
· They can surface past work or push central knowledge.
· They enable workflow-aware recommendations.

4. Foundation for AI

Training & grounding AI: High-quality domain-specific AI needs both language (from text/emails) and structure (from metadata, entities, client-matter hierarchies).

Contextual grounding: Without structured context, AI outputs risk hallucination or irrelevance. With it, AI can:
· Draft with reference to client-specific history.
· Provide answers that respect jurisdiction, timeline, and legal constraints.

Trust & explainability: Context allows AI to cite sources (e.g., “this clause comes from a contract for the same client, same jurisdiction, 2019”). That builds lawyer trust.

5. Why this matters

Context transforms domain specialist search from finding documents to delivering knowledge in the right legal, business, and client setting. Combining unstructured text with structured business data not only improves retrieval and discovery but also lays the essential groundwork for trustworthy, explainable, and workflow-relevant AI assistance.
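Here is the toy sketch of point 2: filter on structured metadata first, then rank by full-text relevance. All names and sample records are invented for illustration; a production system would use a real inverted index and ranker, but the shape is the same.

```python
# Toy sketch: combine structured metadata filters with full-text ranking.
# All names and sample records are illustrative, not a real system.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    client: str
    jurisdiction: str
    matter_type: str

def keyword_score(doc: Doc, terms: list[str]) -> int:
    """Naive full-text relevance: how many query terms appear in the document."""
    body = doc.text.lower()
    return sum(term in body for term in terms)

def contextual_search(docs, query, *, client=None, jurisdiction=None, matter_type=None):
    terms = query.lower().split()
    # Structured filters first: keep only documents that fit the business context.
    candidates = [
        d for d in docs
        if (client is None or d.client == client)
        and (jurisdiction is None or d.jurisdiction == jurisdiction)
        and (matter_type is None or d.matter_type == matter_type)
    ]
    # Then rank the survivors by full-text relevance.
    return sorted(candidates, key=lambda d: keyword_score(d, terms), reverse=True)

docs = [
    Doc("Indemnity cap negotiated in master services agreement...", "Acme Energy", "England", "contract"),
    Doc("Notice of termination of commercial lease...", "Beta Properties", "England", "lease"),
    Doc("Indemnity clause fallback positions memo...", "Acme Energy", "England", "contract"),
]

for hit in contextual_search(docs, "indemnity negotiated", client="Acme Energy", matter_type="contract"):
    print(hit.text[:50])
```

This filter-then-rank shape is also what makes contextual grounding tractable for AI: the structured filter guarantees that retrieved passages already respect client, jurisdiction, and matter type before a model ever sees them.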