Tiger Lake is now in public beta for scale and enterprise users. Finally, a real data loop between Postgres and your lakehouse. Tiger Lake is a native Postgres-lakehouse bridge for real-time, analytical, and agentic systems. No more stitching together Kafka, Flink, and custom glue code. Tiger Lake creates continuous sync between Postgres and Apache Iceberg on S3, built directly into Tiger Cloud. It streams any Postgres table to Iceberg via CDC, and can replicate existing large tables from Postgres to Iceberg via optimized backfill transfers. No need to choose between operational speed and analytical depth. With Tiger Lake, you get both in one architecture. Details: https://lnkd.in/e98mbXfK
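For readers newer to CDC: the Postgres-side primitive underneath pipelines like this is logical replication. Here is a minimal, illustrative sketch of that primitive only; Tiger Lake manages all of this plumbing itself, so this is not its user-facing API, and the table is made up:

```sql
-- Illustrative only: the logical-replication primitive that CDC pipelines
-- build on. Tiger Lake manages this internally; this is not its API.
CREATE TABLE sensor_readings (
  ts        timestamptz NOT NULL,
  device_id text        NOT NULL,
  reading   double precision
);

-- A publication marks tables whose row changes stream to subscribers.
CREATE PUBLICATION iceberg_sync FOR TABLE sensor_readings;

-- A CDC consumer then reads changes from a logical replication slot
-- and writes them to Iceberg tables on S3.
```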
Tiger Data (creators of TimescaleDB)
The fastest PostgreSQL cloud for time series, real-time analytics, and vector workloads. Creators of TimescaleDB
About us
Tiger Data is addressing one of the largest challenges (and opportunities) in databases for years to come: helping developers, businesses, and society make sense of the data that humans and their machines generate in copious amounts. Tiger Data is the fastest PostgreSQL cloud platform with native full-SQL support, combining the power, reliability, and ease of use of a relational database with the scalability typically associated with NoSQL systems. It is built on PostgreSQL and optimized for fast ingest and complex queries. Tiger Data powers mission-critical applications, including industrial data analysis, complex monitoring systems, operational data warehousing, financial risk management, and geospatial asset tracking, across industries as varied as manufacturing, space, utilities, oil & gas, logistics, mining, ad tech, finance, and telecom. Tiger Data is backed by NEA, Benchmark, Icon Ventures, Redpoint Ventures, Two Sigma Ventures, and Tiger Global.
Documentation: https://docs.tigerdata.com/
GitHub: https://github.com/timescale/timescaledb
Twitter: https://x.com/TimescaleDB
- Website
- https://www.tigerdata.com/
- Industry
- Software Development
- Company size
- 51-200 employees
- Headquarters
- New York, NY
- Type
- Privately Held
- Founded
- 2015
Locations
- Primary
335 Madison Ave
Floor 5
New York, NY 10017, US
Updates
Tiger Data (creators of TimescaleDB) reposted this
Last month, we shipped our biggest AI launch yet: the new MCP server in Tiger CLI. With such a big product launch, it's easy to miss things, so here are five features you're probably not using with the new Tiger MCP, but really should!

1. Let your AI manage your databases. Run `tiger mcp install` and your AI assistant can create services, check connections, and run queries without you leaving your editor.
2. Instant database forks. Zero-copy clones in seconds. Fearlessly test migrations against real data, then delete the fork when you're done. The first anxiety-free Postgres platform for agents.
3. Docs search built in. Your coding agent can search Postgres docs (versions 14-18) and TimescaleDB docs directly. No more browser tabs. Always up to date.
4. Skills that teach your AI Postgres best practices. Alongside Claude's launch of Skills, we also created our own after distilling 30 years of Postgres know-how: schema design, hypertable setup, compression policies. The stuff that prevents painful migrations later.
5. Run SQL through your AI. Ask "how many events came in today?" and get the answer. No need to remember connection strings. All within your coding agent.

If you're using Tiger Cloud, all of this is already available: `tiger mcp install`. Pick your editor (Claude Code, Cursor, VS Code) and you're set. Full blog: https://lnkd.in/e4tbxEHF
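As a rough illustration of point 5, the question above might translate into SQL along these lines (the `events` table and `ts` column are hypothetical, purely for illustration):

```sql
-- Hypothetical: the kind of SQL an agent might generate for
-- "how many events came in today?" (events/ts are made-up names).
SELECT count(*) AS events_today
FROM events
WHERE ts >= date_trunc('day', now());
```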
Tiger Data (creators of TimescaleDB) reposted this
If your AI agents aren’t running on Tiger Data (creators of TimescaleDB), you’re burning $36,000 to $180,000/year. Here's why you need to fix it today.

Most AI founders think their biggest bottleneck is:
→ "models"
→ "prompt quality"
→ "agent orchestration"

Nope. It’s your database layer. And here’s the part nobody notices until it’s too late: AI agents run 10×-100× faster than your storage. But your storage hasn’t changed since 2015.

If you’re still using EBS, Persistent Disks, or Managed Disks:
→ Every fork = a full clone
→ Every modification = slow
→ Large DBs = hours of waiting
→ Every clone = full storage billing
→ Parallel agents = exponential cost

So while your agents are sprinting, your infra is dragging them by the ankle.

Here’s the problem in simple English: let’s say you provision 1TB on AWS. You only use 200GB. You still pay for 1TB. Now let your AI agents do what they do best:
→ Fork databases
→ Spin up isolated test environments
→ Create parallel versions
→ Build 3-5 features at once
→ Run multiple test flows

Suddenly you're paying for 3TB… 5TB… 8TB… even if you're only using a few hundred GB. And speed? EBS volume modifications can take hours. Your agents aren’t the problem. Your storage system is.

Tiger Data fixes this with two breakthroughs:

1. Zero-Copy Forks
→ Forks in seconds
→ No duplication
→ All copies share the same underlying storage

2. Copy-on-Write
→ Only new changes create new data
→ Huge shared baseline
→ 70-90% storage savings

This means:
✓ Your agents can spin up isolated databases instantly
✓ You only pay for the differences, not full clones
✓ Parallel agents finally become practical
✓ You stop burning money without knowing it

This is how agentic development was meant to work. If your product relies on AI agents, you can’t afford slow or expensive infrastructure.

If you want my exact setup + workflow: comment "Tiger" and I’ll DM it to you. (must be connected)
Tiger Data (creators of TimescaleDB) reposted this
✨ TimescaleDB 2.24 feature teardown: Lightning-fast columnar compaction

TimescaleDB brings an optimized columnstore to Postgres, but one that still looks like a standard table. Under the hood, a hypertable is partitioned into chunks: the most recent data typically in row format, the rest in compressed columnar form. Query planning hides all of this, however, and users simply see a table.

Unlike many columnstores, ours is fully (and transactionally) mutable. Inserts, updates, upserts, and deletes all work efficiently. Each columnstore chunk is paired with a tiny interim rowstore: new mutations land there, and queries transparently read from both. Over time, these changes are written back to the columnstore as larger, well-ordered batches. And in high-ingest workloads, Direct Compress writes compressed batches directly to the columnstore, bypassing these interim rowstores entirely.

Both write paths can produce many small columnar batches, especially when a segmentby key creates separate batches per tenant, device, or id. These batches are correct, but suboptimal for locality and scan efficiency. Compaction merges these fragmented batches into larger, better-organized segments. In 2.24, this now happens fully in memory (respecting maintenance_work_mem), avoiding the older disk-based sorting/segmentation path.

The result: 4-5x faster compaction, far lower I/O, a cleaner physical layout, better locality, and faster queries. All while behaving like a single Postgres table. 🐯🚀 Tiger Data (creators of TimescaleDB)
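For context, here is a minimal sketch of the columnstore settings the post refers to, using standard TimescaleDB calls (the `metrics` table and its columns are illustrative, not from the post):

```sql
-- Illustrative hypertable with the columnstore enabled.
CREATE TABLE metrics (
  ts        timestamptz NOT NULL,
  device_id text        NOT NULL,
  value     double precision
);
SELECT create_hypertable('metrics', 'ts');

-- segmentby groups rows into per-device columnar batches (the small
-- batches that compaction later merges); orderby orders rows within them.
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby   = 'ts DESC'
);

-- Convert chunks older than 7 days to columnar form on a schedule.
SELECT add_compression_policy('metrics', INTERVAL '7 days');
```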
Tiger Data (creators of TimescaleDB) reposted this
✨ TimescaleDB 2.24 feature teardown: Direct Compress just got smarter (and faster)

When we introduced Direct Compress in 2.21, the idea was straightforward: instead of writing data into the row store and later converting it to compressed columnar format, perform that transformation on the ingestion path. For high-ingest workloads, we compute a compressed batch at write time and persist it directly into the column store as a single transaction.

In 2.24, we’ve extended this to work seamlessly with hypertables that maintain continuous aggregates. TimescaleDB has always collected invalidation ranges in memory during a transaction and flushed them at commit. These ranges identify which pre-computed aggregates must be refreshed in response to new data. This release extends that same mechanism to Direct Compress, ensuring that invalidation ranges reflect the new compressed batches built during ingestion.

The result is that users can now combine two of TimescaleDB’s most important capabilities -- the hypercore columnar engine and continuous aggregates -- while retaining the benefits of Direct Compress: lower write amplification, meaningfully faster ingest under load, and smaller WAL footprints with fewer IOPS.

We’re building the best #Postgres. Now even faster and easier. 🐯🚀 Tiger Data (creators of TimescaleDB)
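Continuing the illustrative `metrics` hypertable from the previous teardown, the continuous-aggregate side looks like this in standard TimescaleDB syntax (names are examples; the post's point is that Direct Compress now feeds the invalidation ranges that keep such views fresh):

```sql
-- Illustrative continuous aggregate over the metrics hypertable.
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', ts) AS bucket,
       device_id,
       avg(value) AS avg_value
FROM metrics
GROUP BY bucket, device_id;

-- Refresh policy: keep the last 30 days fresh, lagging 1 hour behind
-- the ingest edge, recomputing every 30 minutes.
SELECT add_continuous_aggregate_policy('metrics_hourly',
  start_offset      => INTERVAL '30 days',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '30 minutes');
```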
Tiger Data (creators of TimescaleDB) reposted this
Back in SF after a very productive (and genuinely fun) AWS re:Invent. Some of my takeaways:

𝗔𝗴𝗲𝗻𝘁𝘀, 𝗮𝗴𝗲𝗻𝘁𝘀, 𝗮𝗴𝗲𝗻𝘁𝘀. Last year it was all about “AI.” Seven years ago it was “big data.” What’s interesting to me is that it all feels like the continuation of the same underlying trend. Agents couldn’t exist without Gen AI, which couldn’t exist without big data.

𝗟𝗲𝘀𝘀 𝗵𝘆𝗽𝗲, 𝗺𝗼𝗿𝗲 𝘀𝘂𝗯𝘀𝘁𝗮𝗻𝗰𝗲. Compared to previous years, it felt like there were more serious players and fewer unproven startups. Maybe that’s the AWS audience skewing more mature. Maybe it’s the industry as a whole maturing.

𝗡𝗼 𝗰𝗿𝗮𝘇𝘆 𝗮𝗻𝗻𝗼𝘂𝗻𝗰𝗲𝗺𝗲𝗻𝘁𝘀. In past years, you’d have to brace for whatever giant AWS launch might existentially threaten your whole business. This year I saw nothing like that, which honestly made the whole thing feel calmer and more pragmatic.

𝗔𝗪𝗦 𝗶𝘀 𝗯𝗲𝗰𝗼𝗺𝗶𝗻𝗴 𝗹𝗲𝗴𝗶𝘁𝗶𝗺𝗮𝘁𝗲𝗹𝘆 𝗽𝗮𝗿𝘁𝗻𝗲𝗿-𝗳𝗿𝗶𝗲𝗻𝗱𝗹𝘆. This is a big shift from what I saw 7+ years ago. The internal comp changes, the incentives, the co-selling: all of it now pushes AWS teams to actually work closely with partners on real “better together” stories. It’s a very different vibe, in a good way. Kudos to everyone there making that happen.

𝗔𝗹𝘀𝗼: 𝗶𝘁 𝗱𝗲𝗳𝗶𝗻𝗶𝘁𝗲𝗹𝘆 𝗳𝗲𝗲𝗹𝘀 𝗹𝗶𝗸𝗲 𝗵𝗮𝗹𝗳 𝗼𝗳 𝗦𝗙 𝗲𝗻𝗱𝘀 𝘂𝗽 𝗮𝘁 𝗿𝗲:𝗜𝗻𝘃𝗲𝗻𝘁. It’s always a little funny to fly 1.5 hours to meet people who technically live down the street, but somehow it works: the environment forces the collisions that don’t happen at home. Met some great people, reconnected with folks from previous years, and had some genuinely interesting conversations.

𝗔𝗻𝗱 𝗳𝗶𝗻𝗮𝗹𝗹𝘆, 𝘁𝗵𝗲 𝗧𝗶𝗴𝗲𝗿 𝗗𝗮𝘁𝗮 𝘁𝗲𝗮𝗺 𝗰𝗿𝘂𝘀𝗵𝗲𝗱 𝗶𝘁. Best booth we’ve done, great materials, nonstop meetings, fun evening events, and of course, top-tier SWAG. I’ve never been prouder of how we showed up. Now back to work! Tiger Data (creators of TimescaleDB) AWS re:Invent Amazon Web Services (AWS)
🚀 That’s a wrap on AWS re:Invent! Our booth was buzzing with energy from start to finish. It was great catching up with customers, partners, and friends - old and new. Huge thanks to everyone who stopped by, joined our talks, and attended our happy hour! We loved connecting with you all and sharing how Tiger Data is the leading time-series Postgres solution for real-time, analytical, and agentic applications. See you at the next one! 👀
Why can't agents find the right docs, and how does Postgres fix it? That was the topic of a recent talk by Tiger Data (creators of TimescaleDB)'s own DevRel 🤖 Jacky Liang at AI Dev 25 x NYC, where he shows why AI agents struggle to find the correct documentation and how to fix it using PostgreSQL (a sketch of one common approach appears after the video link below): https://lnkd.in/giXDTbWc
AI Dev 25 x NYC | Jacky Liang: Why Agents Can't Find the Right Docs (And How Postgres Fixes It)
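One common way to attack this problem in Postgres, sketched here as an assumption rather than a summary of the talk's exact method, is to store documentation chunks with pgvector embeddings and retrieve them by vector distance:

```sql
-- Assumed approach (not necessarily the talk's exact method): doc
-- chunks with pgvector embeddings, retrieved by cosine distance.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE doc_chunks (
  id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  source    text,           -- e.g. URL or file path of the doc page
  content   text,           -- the chunk text handed to the agent
  embedding vector(1536)    -- dimension depends on the embedding model
);

-- Fetch the 5 chunks nearest to a query embedding passed as $1.
SELECT source, content
FROM doc_chunks
ORDER BY embedding <=> $1
LIMIT 5;
```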
How Flogistix by Flowco cut infrastructure management costs by 66% with Tiger Data

Flogistix by Flowco uses streaming data to optimize vapor recovery and maximize well performance. But their tech stack couldn’t keep up. They were stuck with:
▪️ A fractured mix of technologies
▪️ Limited expertise for fine-tuning performance
▪️ Lagging regulatory follow-ups
▪️ Delayed field service insights
▪️ Exponential growth in data

They needed a system that:
▪️ Their team could run with existing Postgres/SQL skills
▪️ Scaled cleanly for time-series data
▪️ Exceeded customer SLAs
▪️ Gave field techs real-time insights on demand

Enter Tiger Data. Flogistix eliminated their worst pain points around cost and query latency, and saw:
▪️ 66% cost savings on infrastructure and storage
▪️ 84% data compression in production
▪️ Uptime jumping from ~95% to “well above 99%”

If you’re running data-heavy industrial workloads and feel your stack straining, this case study is worth a read. 👇
🔗 https://lnkd.in/gmHPmDKb
#TigerData #TimescaleDB #Flowco #Flogistix #NoSQL
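If you want to check a compression ratio like that 84% figure on your own hypertables, TimescaleDB exposes the numbers directly (the hypertable name here is illustrative):

```sql
-- Report before/after sizes and percent saved for one hypertable.
-- 'metrics' is an illustrative name; substitute your own hypertable.
SELECT
  pg_size_pretty(before_compression_total_bytes) AS before,
  pg_size_pretty(after_compression_total_bytes)  AS after,
  round(100.0 * (1 - after_compression_total_bytes::numeric
                     / before_compression_total_bytes), 1) AS pct_saved
FROM hypertable_compression_stats('metrics');
```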