Excited to share that Humata Thinking Mode is now live at app.humata.ai! This major upgrade makes Humata dramatically better at answering your toughest questions, especially about graphs, charts, and tables.

I built Thinking Mode with two key upgrades:
- Vision: Our LLM now “sees” documents as full-page images, not just paragraphs of text.
- Thinking: Before replying, our LLM reasons about the document to deliver smarter, more accurate answers.

Just like always, every answer includes precise citations, down to the exact line in the document.

More broadly, chatting with your documents is leveling up as we improve how the AI sees and thinks about them:
- Text → Images
- Paragraphs → Pages
- Document fragments → Full document
- No reasoning → Reasoning

It is great to see the field evolving as AI gets better at answering questions about documents. Two years ago, the state of the art was sending a few paragraphs of poorly parsed text to an LLM and hoping for the best. Now, multi-modal LLMs with huge context windows can see a PDF as a series of page images and reason over those images. The result is much more accurate answers.

Try Thinking Mode and let me know what you think!
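For the curious, the page-as-image idea can be sketched in a few lines. This is a generic illustration, not Humata's actual implementation: the request shape and field names are assumptions about what a typical multimodal chat API expects, with rendered PDF pages attached as base64-encoded images alongside the question.

```python
import base64

def build_vision_request(question, page_images, model="multimodal-llm"):
    """Build a chat request that sends each rendered PDF page as an image.

    `page_images` is a list of PNG bytes, one per page (e.g. from a PDF
    rasterizer). The payload shape is a generic assumption, not any
    specific vendor's API.
    """
    content = [{"type": "text", "text": question}]
    for page_num, png in enumerate(page_images, start=1):
        b64 = base64.b64encode(png).decode("ascii")
        content.append({
            "type": "image",
            # Keeping the page number lets the model cite the exact page.
            "page": page_num,
            "data": f"data:image/png;base64,{b64}",
        })
    return {"model": model, "messages": [{"role": "user", "content": content}]}

# Usage: two placeholder "pages" stand in for real rendered PNGs.
req = build_vision_request(
    "What does the chart on page 2 show?",
    [b"\x89PNG-page-1", b"\x89PNG-page-2"],
)
```

The key design point is that layout, charts, and tables survive rasterization intact, whereas text extraction discards them; the model then reasons over the pages before answering.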