All posts by Cole Camplese


From Miscellaneous to Meaningful

Back in 2008, I wrote a post called Should It All Be Miscellaneous? inspired by David Weinberger’s book and the liberating idea that the web didn’t need rigid hierarchies. Tags, links, and search could replace the old drawers and filing cabinets of the physical world. At the time, that felt like progress: why force everything into neat boxes when the web could be sprawling, searchable, and serendipitous?

Why do I bring up a blog post from nearly 20 years ago? I have a new habit of finding a spot early on Saturday afternoons to catch up on work and have lunch. Standing in line yesterday, I remembered that post and wanted to reflect on how it compares to what we are discovering in our AI journey. The comment section proved worth the rabbit hole.

So here we are in 2025, and I find myself revisiting that question in light of what I wrote recently in Exposing the Missing Pieces in Our Content. Not so much what I wrote, but what the community gave back in the comments. The irony? The very thing that once felt like freedom, letting everything be miscellaneous, has become one of our biggest challenges.

AI has thrown a harsh light on this reality. As Mario said in the comments, “One of the main blockers to unlocking the power of AI is the state of our data and information.” That’s the truth. Our Copilot trainings have surfaced the same theme over and over: the technology is ready, but our content isn’t. Thousands of sites, all managed differently, with redundant information and varying levels of accuracy and oversight. It’s not that the web is broken; it’s that our relationship with it hasn’t matured.

I guess that makes some sense; the web as we know it is still a relative puppy on the higher education governance timeline. Let’s be honest, IT governance is still a work in progress, and it predates the web on our campuses by 30 or so years. Maybe it’s no wonder we are still looking for answers.

Cody’s comment stuck with me too: “I don’t think websites are going anywhere.” I agree (even though I poked at him). Websites aren’t disappearing tomorrow. But the way people expect to interact with information is shifting fast. AI agents, chat interfaces, and voice assistants aren’t replacing the web; they’re reframing it. They’re forcing us to ask: what is the role of a website when an agent can synthesize answers in seconds? Maybe the answer is harmony, as Cody suggested: agents and websites complementing each other, each doing what they do best.

Valerie and Kristin added another layer: this isn’t just about technology; it’s about stewardship. Kristin’s metaphor hit home: “We don’t build a world-class art museum and ask everyone to drop off the paintings they like most.” Yet that’s how we’ve treated our institutional web for decades: every department spinning up a site, every reorg leaving behind digital fossils. AI is exposing that fragility. And as Kristin said, maybe the CIO has to become the Chief Curator now. I mean, content is information, after all.

So here we are, nearly 20 years after I asked if everything should be miscellaneous. The answer? It depends. The web still needs flexibility, creativity, and openness. But it also needs anchors, places where truth lives, where information is accurate, current, and trusted. Not because AI demands it (though it does), but because our community deserves it.

AI didn’t create this problem; it is revealing it. And maybe that’s the push we need to finally treat our information like core infrastructure. Why would that change the equation? IT governance has given us the idea of investing wisely over the lifecycle of systems to ensure they are resilient, robust, and reliable as they are constantly consumed. Yes, the content floats on physical infrastructure, but shouldn’t we value it as much as the switches, cabling, and access points? And just as with the lifecycle of infrastructure, it should be governed by prioritizing the most critical, highest-risk, and greatest value-creating investments.

It leaves me with so many questions that I don’t have answers to. Questions like: if we could only invest in 20–30 primary sites across the university, which ones would make the cut? How do we balance the creative chaos of the open web with the need for authoritative sources that AI (and humans) can trust? Are we ready to think of ourselves not just as technologists or communicators, but as curators of institutional knowledge?

I bet someone out there has a thought or two.

Exposing the Missing Pieces in Our Content

Part of our campus AI journey is to design and deploy AI agents that can utilize key information from existing websites across campus. These agents may eventually replace the sites, reducing technical bloat and information drift. Along the way, an unexpected benefit has emerged, one that speaks volumes about the evolving relationship between technology and content strategy on a highly decentralized campus.

When we first set out to build these agents, we did what most teams do: we pointed them at our sites or their underlying data, ingested the knowledge, tested retrieval, and began crafting conversations. But something interesting happened when we put these agents to work. People started asking for things we couldn’t give them.
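For those curious what that first pass looks like, here is a minimal sketch of the point-ingest-retrieve loop. It is illustrative only: the page content is made up, and the word-overlap scoring is a stand-in for the embedding and vector search a real agent platform would use.

```python
# Minimal point -> ingest -> retrieve sketch. Hypothetical throughout:
# a real agent platform would embed passages and rank by vector similarity.
from dataclasses import dataclass

@dataclass
class Chunk:
    source_url: str  # page the passage was ingested from
    text: str        # one retrievable passage

def ingest(pages: dict[str, str], max_words: int = 80) -> list[Chunk]:
    """Split each page into small passages the agent can retrieve."""
    chunks = []
    for url, body in pages.items():
        words = body.split()
        for i in range(0, len(words), max_words):
            chunks.append(Chunk(url, " ".join(words[i:i + max_words])))
    return chunks

def retrieve(query: str, chunks: list[Chunk], k: int = 3) -> list[Chunk]:
    """Toy ranking by word overlap between query and passage."""
    q = set(query.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c.text.lower().split())),
                  reverse=True)[:k]

# Made-up page standing in for a campus site.
pages = {"https://example.utexas.edu/parking":
         "Visitor parking permits are issued at the garage office between 8 and 5."}
for hit in retrieve("where do I get a visitor parking permit", ingest(pages)):
    print(hit.source_url, "->", hit.text[:60])
```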

In short, the agents began surfacing questions we hadn’t anticipated: questions students, faculty, staff, and prospective Longhorns are likely asking every day. And, just as importantly, they showed us where our data and content fell short.

They have become mirrors, reflecting both the structure and the fragmentation of our institutional knowledge. The things they cannot answer point directly to gaps in the content architecture: outdated FAQs, scattered documentation, siloed policy pages, and even buried gems of information lost in PDF archives or legacy web systems. It’s not that the information doesn’t exist. It’s that it’s too hard to find, inconsistently written, or lacks the context necessary to form a coherent response. We’re sure the agent isn’t making mistakes per se; it tells us what it can’t say, and that’s been incredibly valuable.

One of the more revealing moments for me came when we began evaluating how the agent performed with the A–Z directory. This is a resource that has long served as the backbone for finding services and offices across the university. But once we put the agent to work with this data, the limitations of that system became painfully clear. What we had assumed was structured, complete, and reliable turned out to be limited, outdated, and in some cases, misleading.

UT Spark AI interface showing the A-Z agent.
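To show what that evaluation can look like mechanically, here is a hypothetical audit of a single directory record. The field names, the one-year review threshold, and the sample entry are assumptions for the sketch, not the directory’s actual schema.

```python
# Hypothetical audit of one A-Z directory record: flag anything an
# agent could not serve up confidently. Schema and threshold are assumed.
import datetime

REQUIRED = ("name", "url", "description", "last_reviewed")

def audit(entry: dict) -> list[str]:
    """Return a list of reasons this record is not agent-ready."""
    problems = [f"missing {field}" for field in REQUIRED if not entry.get(field)]
    if entry.get("last_reviewed"):
        age = (datetime.date.today()
               - datetime.date.fromisoformat(entry["last_reviewed"])).days
        if age > 365:  # assume a record untouched for a year is suspect
            problems.append(f"not reviewed in {age} days")
    return problems

sample = {"name": "Parking and Transportation",
          "url": "https://example.utexas.edu/parking",
          "description": "",            # a common gap: no usable summary
          "last_reviewed": "2021-05-01"}
print(audit(sample))  # -> ['missing description', 'not reviewed in N days']
```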

This has been a bit of a wake-up call. It is so tempting to take a “lift and shift” approach, move what we have on the web into the AI agent and assume it will just work. But that does not hold up. The agent exposes what the web often hides. It forces precision. It requires context. And it absolutely demands trust in the data that fuels it.

We are now integrating these insights into a more systematic approach. Each time a query breaks down, we want to trace it back. The questions we need to be asking: Should this information exist? If so, where should it live? Can we make it easier to find, easier to understand, and easier for the agent to serve up confidently?
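Here is one way that trace-back might be operationalized, sketched with an assumed log format and triage fields rather than anything we actually run in production.

```python
# Sketch of the gap-logging side of the feedback loop. The CSV layout,
# reason labels, and example query are assumptions for illustration.
import csv
import datetime

GAP_LOG = "content_gaps.csv"

def log_gap(query: str, best_source: str, reason: str) -> None:
    """Record one query the agent could not answer, for content triage."""
    with open(GAP_LOG, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.date.today().isoformat(),
            query,        # what the person actually asked
            best_source,  # closest page the retriever found, or "none"
            reason,       # e.g. "outdated", "missing", "buried in PDF"
        ])

# Each logged row then gets the three questions above: should this
# information exist, where should it live, and how do we make it findable?
log_gap("how do I appeal a parking citation",
        best_source="https://example.utexas.edu/parking",
        reason="page exists but predates the current appeals process")
```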

This work is not just about making our AI better. It’s about making our websites more accessible, our documentation more useful, and our services more responsive. Every gap we close improves the experience not just for the agent, but for the human trying to find their way. I didn’t expect this kind of feedback loop to emerge so quickly, but I’m glad it has. It reminds us to slow down, look closely, and be intentional, not just with how we build agents, but how we steward the information we share across this institution.

UT.AI Spark Preview

Most people aren’t aware, but something new is getting ready to happen on campus. For our team, this is the quiet before the storm: those of us working toward it carry an energy that comes with anticipation and a sense that something big is about to unfold. Over the past few months, we’ve been working with teams across Enterprise Technology, ITLC members, Microsoft, and CloudForce to bring a new platform to life, one designed to put artificial intelligence directly into the hands of our community. We’re calling it UT.AI Spark, and while it’s still in preview, the excitement is already building as more and more people get a chance to explore what it can do.

What’s interesting about Spark isn’t just the technology (though, yes, it’s powerful and flexible and all the things you’d hope for in a modern AI platform). It’s the way we’re approaching this launch. Instead of flipping a switch and calling it done, we’re inviting people in early, listening closely, and letting the platform grow in response to real needs and real feedback. Our partners at CloudForce are right there with us, each request turning into a “what if we could create …” conversation to see how we can make it happen. It’s a little bit messy, a little bit experimental, and very much in the spirit of how I prefer to do things: open, transparent, and always focused on what’s actually useful for students, faculty, and staff.

What we plan to release in the next month or so is a stable and robust 1.0 version of our own OpenAI deployment that all of us can use. We will also articulate a roadmap so the community can help us move from 1.0 to 1.1 to 1.2 and so on.

Already, early adopters are finding creative ways to use Spark, from analyzing data in new ways, to brainstorming lesson plans, to simply asking better questions, to creating custom agents to do their bidding. And as each new group comes on board, the community around Spark is starting to take shape. There’s a lot of curiosity, a healthy dose of skepticism, and a genuine desire to figure out what responsible, meaningful AI use looks like in a university setting.

UT.AI Spark interface screenshot.

As the fall semester rolls around, everyone at UT will have access. That’s when things will really get interesting. We’ll have workshops, training, and plenty of opportunities for people to share what they’re learning. But even before that, the most important work is happening now: listening, iterating, and building something that feels right for this campus.

If you’re curious, keep an eye out for updates and invitations to try Spark for yourself. And if you’re already part of the preview, thank you for helping shape what comes next. This isn’t just about rolling out another tool; it’s about starting a conversation and seeing where it leads. I can’t wait to see what we create together.

Copilot Reflection: First 90 Days

Even in the middle of summer, I’m continually reminded of the energy that pulses through our campus. It’s an energy fueled by curiosity, by a relentless drive to learn, and by a community that believes deeply in the power of innovation. Over the past several months, that energy has found a new outlet through our Microsoft Copilot Initiative—a key pillar in our broader UT.AI strategy.

When we launched the Copilot Initiative, our goal was simple but ambitious: to transform the way we work, collaborate, and solve problems across UT Austin. By integrating Microsoft 365 Copilot tools into our workflows, we set out to empower our staff to reclaim time, enhance productivity, and build the digital fluency that will define the next era of higher education.

The results so far have been impressive, with more to come. More than 1,200 staff members have participated in workshops, webinars, and hands-on labs. One in three participants now reports saving 1–2 hours per day, time they’re reinvesting in creative, strategic work that moves our university forward. Over 90% of our colleagues rated these learning experiences as exceptional or above average. These numbers are impressive, but what excites me most are the stories behind them: staff using Copilot to draft emails, summarize complex documents, organize workflows, and transcribe meetings to arrive at impactful decision-making more quickly. We’re not just adopting new tools; we’re reimagining what’s possible.

Of course, transformation isn’t always easy. We’ve encountered challenges around license allocation, data governance, and the quirks of moving from Box to SharePoint. But these are exactly the kinds of problems that signal real change is underway. They push us to ask better questions, to iterate, and to build solutions together.

What stands out from our interviews and feedback is a hunger for more: more cohort-based learning, more job-specific scenarios, more opportunities to experiment and grow. This is the heart of what makes UT Austin special. We are, at our core, a community of perpetual learners.

Looking ahead, I’m excited for what’s next. Later this month, we’ll gather for our AI Summit Week to share use cases and deepen our engagement. We’re rolling out expanded webinars and train-the-trainer workshops, building the internal capacity we need for sustained, campus-wide adoption. And as we do, we’ll continue to listen, to adapt, and to celebrate the creativity and resilience of our staff.

The Copilot Initiative is just one part of our larger UT.AI vision—a vision where technology is not just a tool, but a catalyst for lifelong learning and a culture of innovation. My hope is that we keep pushing the boundaries, keep asking what’s possible, and keep learning together. Because at UT Austin, the future isn’t something we can wait for. It’s something we build, one experiment, one workshop, one bold idea at a time. Here’s to always learning.

Here is a little summary of what we are experiencing from our post-training workshop feedback:

Area | Key Findings
Training Reach | 1,200+ staff trained across UT Austin
Time Savings | 33% saved 1–2 hours/day; 56% saved 1–2 hours/week; 11% saved 1–2 hours/month
Satisfaction | 90%+ rated sessions as exceptional or above average
Productivity | Copilot used for drafting emails, summarizing documents, organizing workflows, project planning
Adoption | High demand for continued learning; strong interest in cohort-based and job-specific training
Challenges | License allocation, data governance, platform inconsistencies (Box vs. SharePoint)
Cultural Impact | Staff appreciated transparency and the university’s commitment to digital transformation
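For a rough sense of scale, here is a back-of-envelope read of those numbers. The 1.5-hour midpoint, the working-day and working-week counts, and the 2,080-hour FTE year are my assumptions; the headcount and percentages come straight from the table.

```python
# Back-of-envelope only: midpoints and calendar assumptions are guesses.
trained = 1200                    # staff trained (from the table)
midpoint = 1.5                    # hours, midpoint of the reported 1-2 range
hours_per_year = (
    0.33 * trained * midpoint * 230   # daily savers, ~230 working days/year
    + 0.56 * trained * midpoint * 46  # weekly savers, ~46 working weeks/year
    + 0.11 * trained * midpoint * 12  # monthly savers, 12 months/year
)
print(f"~{hours_per_year:,.0f} staff hours per year")      # ~185,000
print(f"~{hours_per_year / 2080:,.0f} FTEs of capacity")   # ~89, at 2,080 hrs/FTE
```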

Strengthening Our Data Strategy: D2I Transitioning to Enterprise Technology

At The University of Texas at Austin, we understand that data is essential for making informed decisions and driving innovation. I’m thrilled to announce that Data to Insights (D2I) will officially start reporting to the Vice President of Technology and Chief Information Officer on May 1, 2025.

This transition will enhance collaboration, align practices, and strengthen our commitment to providing top-notch data solutions for our faculty, staff, and students. By integrating D2I under the VP’s portfolio alongside Enterprise Technology, we’re aiming for a more unified and scalable approach to data governance, analytics, and tech services across the university.

Brian Roberts, Vice Provost for Data to Insights, will take on a special advisor role to the Office of the CIO during the transition as we continue planning for D2I’s long-term future. Kathryn Flowers will join the CIO senior leadership team and will lead the D2I team as Executive Director, ensuring smooth leadership and execution of D2I’s mission.

Enterprise Technology April 2025 Org chart.

Rest assured, this transition won’t cause any immediate changes to D2I’s ongoing projects or services. I am confident that our teams will work closely to make this change seamless and enhance our ability to deliver value to the university community.

Enterprise Technology and D2I will be partnering with the CFO’s Office, the Provost’s Office, and the COO’s Office to assess and realign the university’s data analytics goals in support of institutional priorities, with an emphasis on scaling adoption of, and best practices for interacting with, the Data Hub. Over the next six months, these teams will collaborate to develop a comprehensive data analytics strategy, reliant on the centralized Data Hub, to be presented to university leadership, with the aim of implementing it in FY26–27. This effort will include extensive stakeholder engagement, including interviews and cross-functional collaboration across multiple groups, to ensure the strategy is informed, aligned, and positioned to drive meaningful impact across the university.

Thank you for your continued support as we take this important step in aligning our technology and data strategy with the university’s broader goals. If you have any questions or feedback, please feel free to reach out.