So thank you to everyone who's already in here. We have a few people already. I'm basically just going to moderate today. We've got Ollie, who's obviously CEO of Count, and we've got Grace, who is a data consultant at Count as well. Both of them are going to be the main providers of the content today, and I'm just going to sit back and take it easy in the moderation chair. I'm sure there's also hard work to do, Mike. Yeah, I'm sure I'll find something. A lot of you may have questions at certain points; some of you have already started using it, so you're ahead of the game. Feel free to drop any questions in the chat at any point during the webinar. Amanda, hi. David, hi. I'll try to keep a record of them all, and if it makes sense to jump in and interrupt, I'll do that; if not, I'll leave it until the end. There's also time at the end where we can go through the Q&A from everyone, and we'll make sure of that. So with that in mind, I'll hand it over to Ollie and Grace and jump off. We'll probably give it another minute or so to allow more people to come in. But, yeah, Ollie and Grace, take it away.

Thanks, Mike. Hello, everyone. Nice to see you all, and thank you for joining us. This is a really exciting opportunity for us to talk through Count Metrics. We'll crack through and make sure we have time for questions at the end. In terms of timing, we're very grateful you're giving up some of your time; just to give you some comfort, we won't need the full hour we've booked, but we will make sure we have time for questions, and we're not in a rush if you want to stick around and ask us more. Let me first introduce ourselves. As Mike mentioned, I'm Ollie, one of the cofounders of Count, and this is Grace. Hello, Grace. Hey, guys. Grace is a data consultant, and our customer success team has already been working with a lot of our customers on Count Metrics, so I'm very excited to have her support to actually show you how the whole thing works in more detail.

In terms of the agenda for today, we're going to cover a few things. Firstly, we'll introduce you to Count. I know a lot of you here aren't Count users yet and probably don't necessarily know what Count is, so we'll go through that introduction as a foundation to explain where Count Metrics sits. We'll also go through the pros and cons of semantic layers. About six months ago we made a decision not to adopt a separate open-source or proprietary semantic layer and to build our own instead. That wasn't an easy decision, but we'll explain our rationale for doing it, and hopefully that will help you understand, if you're looking at this idea of a governed data model, how you should be considering it. We'll then walk you through Count Metrics, show you how it's different and where we've made some big bets in terms of infrastructure, which we think are really quite industry-defining. We'll also show it to you: Grace will walk us through a demo, and then we'll have time for Q&A. So hopefully that's an exciting agenda for you. We'll keep this lighthearted as we go, but that's the idea.
You should leave here fully equipped to know what Count is, what a semantic layer should do, how to choose a semantic layer, and to have seen what we believe is the best-in-class semantic layer in production today. So let's keep going.

Just to tell you a bit more about Count: we're based in London, as you can probably tell from Grace's and my accents, although we have a team all over the place; we're a distributed company. As you can see, these are some of our wonderful customers who have bought into the idea that dashboarding isn't the way to drive maximum value from data, and we're really grateful for them. We're a tech start-up, and our mission is ultimately to turn BI into something that is more than dashboards, into a tool which drives business improvement. We believe BI tools now need to do more than just drive metrics and dashboards.

To give you a sense of where we think we are as an industry: data teams are increasingly realizing they're in a service trap. They realize that their ability to drive value is limited by the ability of the wider organization to engage with the work they're doing and actually use the data they're producing. Instead, the best data teams we work with recognize that they're uniquely placed to be the driving force of business improvement: they have a role to play in helping the business see itself better, providing better clarity, and then helping solve the organization's biggest problems, moving beyond just being the provider of data and that service support function that no one became a data person to do.

To give you a sense of how we think about this, and this is walking through Count's thesis, we believe in what we call the improvement cycle. It has many names, but it's a very fundamental principle of driving value from data: you have to be able to identify opportunities and problems, explore and find solutions within the data to validate your assumptions, make decisions, and monitor the change. This cycle happens whether you like it or not; it's the way data drives value. You may have seen it in other frameworks like continuous improvement, or if you've used any growth frameworks, you'll see it as a key frame of growth or marketing work. The key thing is that everyone's doing it, but the companies who do it best are the ones who perform best. Our strong hypothesis when we built Count was that BI tools are actually only good at one section of this cycle: monitoring. They're good at letting you see numbers in a dashboard and understand what's going on. They do not support the rest of the cycle, and in fact, as an industry, we call the rest of the cycle "ad hoc analysis". So if you say "we've got too much ad hoc analysis", what you mean is you haven't got a very good improvement cycle happening: it's all very chaotic, no one understands what's going on, you're answering random questions, and decisions are happening in a very haphazard way.
It may look a bit like this in your organization right now: random questions coming from people's thoughts, from people seeing something in a dashboard, or from some degree of self-service exploration, which leads to a question existing in a Slack message somewhere or sent in as a ticket. The data team then go off and try to explore that in a whole host of different tools, feeding back what they're finding, getting validation, and moving through this before trying to get to some sort of report to help the business make a decision, which leads to more feedback. And that ultimately leads to more mess and more reports to monitor what's going on, because people aren't really sure. This whole process is very inefficient, and it's driven, ultimately, by the BI tool not being fit for purpose when it comes to supporting business improvement.

So our belief is that traditional BI tools are not enough: they drown the business with information, they silo people away from the business context, they just give you numbers, they're either overly governed or overly flexible and cause chaos either way, and they fracture the team's workflow. What we believe BI should be is a place to put metrics in context, get the bigger picture of what's actually going on, and help visualize the business. You need a place where you can actually problem-solve and work with business context and numbers in the same place, and you need that governed layer but also the flexibility to work with it. That ultimately means a data team's role should be to support that cycle of improvement rather than just being an input to the business. So that is the context of Count, what we're about as an organization, and our bet when it comes to BI.

So where does Count fit? Well, to support the improvement cycle, we built Count so the full improvement cycle can sit in one platform. We'd like to think that BI should stand for business improvement: you can use Count to map your business, not just your metrics, so you can see how your metrics are contextualized. You can build metric trees, and you can build process flow maps to help you see what's going on and see the business more clearly. It's a place where you have the power of a collaborative data notebook with SQL, Python, AI, and low code to help you structure your thinking, lay out your analysis as you dig into what's going wrong, and do that as a team. You can then use the whiteboard element of a canvas to collaborate, draw conclusions, and make decisions in real time, or asynchronously with comments. All of that can happen within one collaborative workflow. But Count is still a BI tool: you can still do all the boring features, as we call them, of dashboarding, permissions, alerts, and scheduling, all the things you need to do the day job of providing data to the business as well. So that's what Count is, and that's setting the scene for how we see the world needing to change in the BI space and how we're helping our customers change the role of data in their organizations.

So the question is, where do semantic layers fit into this, and where does Count Metrics fall? Well, first off, let's think about what a semantic layer actually is. The key thing to remember about a semantic layer is that it's a SQL-generating machine.
Obviously, most organizations have data centralized in a data warehouse. What a semantic layer does is sit between your reporting layer, a.k.a. all the charts, and your data warehouse, and give you an abstraction of logic so that you can make an API request that says "sales for this period, split by region". The semantic layer holds the logic, the rules that define how that request gets turned into a SQL query to send to the data warehouse, and then the results get passed back by the semantic layer to your charts. That is ultimately what a semantic layer is and what it's for. It lets you abstract that logic so it doesn't sit in the reporting layer; instead, it sits within a governed, defined semantic layer.

The benefits of a semantic layer, if you don't already know them, are quickly these: it abstracts away complexity; it keeps business logic centralized and version-controlled in a middle layer, so you can have one definition of sales rather than that definition being spread across a number of different reports; and it's machine-readable, making it great for use in AI or as an API for other applications. For this audience, that's probably not new information. What I want to talk to you about, and challenge you on, is why, when we were looking at how to support this level of logic in Count, we chose to build it ourselves rather than adopt a third party. And the answer is that semantic layers are not a silver bullet. There are three big problems with semantic layers that we don't think are being addressed very well: firstly, performance and cost; secondly, governance at the cost of flexibility, and as we discussed, we think both are important for driving value from data; and thirdly, discoverability challenges still remain.

So firstly, performance and cost. This is a very simple example of a very simple semantic layer. We have three database tables here; it's actually Spotify data: tracks, genres, and artists. Tracks and genres each join to artists with a one-to-many relationship, so an artist can have multiple genres and, obviously, multiple tracks. So I can ask the question: what's the average song duration by artist, split by genre? I can see whether Taylor Swift had longer songs in her recent era or in her earlier albums. The SQL that's generated for that query in most semantic layers, and here is the market-leading semantic layer (I'll leave you to guess which one that is), can get pretty gnarly. This is about ninety-five lines of SQL, which I'll show you later. And this SQL blowout means that although semantic layers may give you a simplification of logic, what gets thrown at the data warehouse isn't necessarily optimized in any way. It's actually quite common for us to see that semantic layers can reduce performance and also drive up cost. For example, as this slide shows, in the last month I've spoken to a number of different companies, as you can imagine, and thirty percent of them spend more on their data warehouse compute than on their BI tooling. What that means is that their BI layer, and their semantic layer if they have one, is driving more cost down here than they're spending up here.
And a semantic layer used badly can make that problem even worse, because it sends unoptimized queries to Snowflake, whereas before, an analyst could at least make sure the SQL was optimized. You're actually removing the ability to optimize your SQL if you let the semantic layer define it.

The second challenge is that a semantic layer can give you a walled garden. It gives an analyst or a business user doing self-service a very safe space to play in, so they can answer very simple questions. But, as I'm sure you know already, the most valuable questions the business needs to ask aren't the obvious ones; they aren't just the average duration of songs by genre. It's the questions that go beyond that, which might need to go further back into the database or sit outside the bounds of what the business expects, that drive the value. We didn't see a way of solving this problem with a proprietary semantic layer: how do you stop people exporting the query into an environment where they have more flexibility, like an Excel file, so they can actually explore and interrogate the data? The rigidity of the semantic layer still causes the data to fragment and the lineage to be lost.

Finally, confusion can still reign. Even though you have a governed metric, which means sales can be defined clearly, it's still very easy to get lost about which sales number actually matters, where the filters aren't applied, and where there are still multiple reports showing different versions of sales, which are hard to find. So questions like "have you got time for a quick question?", "can you help me find...?", and "why is it ten here and six there?" still happen in companies with a very governed semantic layer in their BI layer. So though it's helpful in terms of locking down logic, it doesn't guarantee that the business's understanding will improve.

With those questions left open, we made the decision to build our own semantic layer to support the full improvement cycle. Count Metrics is a powerful governed data model which provides trusted data at every stage of the Count platform. That means you can use Count Metrics as a base layer of metrics in the identify stage, mapping metrics and building metric trees, but it's also available as you're doing more exploratory analysis in the collaborative notebook environment and making decisions as a business, knowing the data you're looking at is the same in your reports as it is in your exploratory analysis. And there are three really big benefits within Count Metrics compared to the semantic layers on the market. One is its very powerful modeling framework, built to work with any business's data model. It's flexible and governed, as we alluded to. And most importantly, back to the whole performance and cost issue, you can run Count Metrics in a way that reduces your data warehouse cost by eighty percent, and I'll show you how that works in a second. So firstly, on flexibility: when you're using Count Metrics, there's a hierarchy of three layers as you build out your company data model.
Firstly, you have a view. A view is where you define metrics and dimensions off a database table: the aggregations you're doing or the expressions you want, be that window functions, ratios, arbitrary expressions, or year-on-year comparisons, alongside your dimensions. So you take a database table and abstract away the rules that govern how SQL is generated off that view. The second layer is the dataset. A dataset is a collection of views, and it defines how those views relate to each other. Many semantic layers have limitations here; in Count, the relationships between views can be many-to-one, one-to-one, many-to-many, or any combination. So you can define how your fact tables, dimension tables, and time series tables map to each other with the complete flexibility you need, depending on what's best for your data model and your use case. And then you can have a number of datasets at the catalog level to give you a full overview of your business and define different use cases or different departments as you want. So there's no limitation in Count on how the hierarchy of your business maps to the metrics you have (there's a rough YAML sketch of this hierarchy just below).

The second thing is performance and cost. This goes back to the Spotify example I was showing you before, and it shows how much more efficient Count is at generating SQL than the market leader. This is the SQL that Count generates for that query: there are twenty-six lines of SQL here, and it's pretty much as you'd write it yourself, with some left outer joins to bring these two tables together. Whereas, as I mentioned before, the market-leading semantic layer would generate that query with ninety-five lines of SQL. Now, ninety-five lines isn't necessarily less performant in itself, but in this case it very much is. You can see all the substrings, the casts, the conversions to hexadecimal, all trying to drive down to a defined query. All this logic just slows down the performance of that query, in this case by at least ten percent, I think, and as the query gets more complicated, that can escalate.

The next thing I want to show you is the flexibility in the caching layer. This is how you can use Count to save costs in your data warehouse. Whereas most semantic layers sit as a fixed middle ground between your data warehouse and your reporting layer, in Count this is optional. I can have the semantic layer building a view off the data warehouse directly and passing the logic through to charts in different canvases. But I also have the choice to schedule the caching of a database table or query from the data warehouse into our DuckDB analytics engine and then build a view off that. This means that rather than every query that builds off the semantic layer hitting your data warehouse, you can choose to have those queries run against a replica of that table in DuckDB, in memory, which gives you much faster performance and means you're not hitting your database again and again for complicated queries.
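To make that view/dataset/catalog hierarchy concrete, here is a minimal, illustrative sketch of how the Spotify example above might be laid out. The key names and structure are assumptions for illustration only, not Count's exact catalog schema; the point is simply that views define metrics and dimensions off tables, and a dataset declares how those views relate.

```yaml
# Illustrative only: hypothetical key names, not Count's exact schema.
views:
  tracks:
    table: spotify.tracks            # source table in the warehouse
    dimensions:
      track_name: {column: name}
      artist_id: {column: artist_id}
    metrics:
      avg_duration:
        sql: avg(duration_ms)        # aggregation defined once, reused everywhere
        label: Average song duration (ms)
  artists:
    table: spotify.artists
    dimensions:
      artist_name: {column: name}
  genres:
    table: spotify.genres
    dimensions:
      genre: {column: genre_name}
      artist_id: {column: artist_id}

datasets:
  music:
    views: [tracks, artists, genres]
    relationships:
      - {from: artists, to: tracks, type: one_to_many, on: artist_id}
      - {from: artists, to: genres, type: one_to_many, on: artist_id}
```

With the relationships declared like this, a request such as "average song duration by artist, split by genre" becomes a matter of picking the avg_duration metric with the artist_name and genre dimensions; the semantic layer generates the joins.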
Not only that, you can still access the actual database tables yourself within Count if you want to, which means you can combine the raw data in your data warehouse alongside any metric definitions you want from Count Metrics in a single canvas or a single query, and benefit from both the flexibility and the governed state in one place. So this is the architecture diagram of why Count Metrics is such a powerful place to have a governed layer while still giving you flexibility as well.

We had a question come in from David about the join types: how are we using Puppini-bridge-style queries to handle many-to-many, multi-fact situations? That's a great question, David; I'm going to come back to it at the end. You're diving straight into the detail, very good. Let's keep going, and we can walk through that, Grace, when we get to the demo. That's quite a helpful question.

So the idea of Count Metrics is that it gives you this governed data model with an in-memory compute engine, allowing you to have metrics across the entire improvement cycle. You can move from defining your metric maps in the same place you build your operational dashboards, if you want to build those, and have access to the same definitions as you do more exploratory analysis, problem-solving with the business to come up with better answers and understand what's really going on. That's ultimately the core idea of Count Metrics: it sits as a governed layer that you can use wherever you need to, without being a restricted environment. Great. That's the introduction to Count Metrics. I think, Grace, we're going to hand over to you now to go through a bit of a walkthrough.

Nice. Cool. I will share my screen. I guess the best place to start is looking at the final UI and what Count Metrics looks like in action. Within this canvas, we've got an example of a metric tree and an example of a dashboard, both built using our product catalog in Count Metrics. Firstly, a metric tree is a really great way to visualize your main KPIs in a hierarchical structure and understand how the data interacts. Within this canvas, in the left-hand sidebar, you can see I've made use of our overviews feature, giving a short description of what's going on in the canvas, plus the global filters that sit within each of these canvases. So this metric tree can be really interactive and collaborative. If I navigate to my data area, you can see I'm connected to my product metrics catalog, the same way you'd normally see any of your database connections. Beneath that we have our datasets: this dataset is named workspace, and here are all of the different views that sit within it. While I'm navigating the canvas, you can see we've made use of lots of different visualizations. If I click on one of those visuals, the visual builder opens up on the left-hand side. Count Metrics is designed to be stakeholder friendly, for non-technical users, and because of that we have drag-and-drop functionality. So you can add in paid status as a color breakout, for example.
We know that company-wide canvases or metric trees like this are really important to share with stakeholders, but we don't necessarily want to always allow them to make changes to the underlying data itself. So with a metric tree like this, if you navigate to the Count logo and select lock canvas, you'll be able to share it with stakeholders without them changing that underlying data. Another cool feature we've introduced is explore. While navigating a canvas like this, if I see a visual that's interesting to me and I want to explore it further, I can select explore cell, and that same visual will open up in a new canvas, and I can continue exploring from there. So, for example, if I pull through paid status again and maybe make it a bar chart, you can see it's really collaborative and easy to use. Say I've finished my data exploration: I can then return to the canvas, and you can see it hasn't changed any of that underlying data. If we go back to it, I add in those same breakouts, and say I notice something interesting that I want to keep exploring, I can do that by saving it as a canvas in a project or sandbox environment. So I'll save it to my product project. From there, you can continue that data exploration the same way you can in any other Count canvas, and you can add in sticky notes and annotations as well if you need to.

I love this. I'm only in here collaborating with you, Grace, just to wind you up. Sorry about that. No, that's great. Let's add in a slide, and then we can dump that into a nice one-slide thing. We found a new thing. Exactly.

So it's designed to be super collaborative and interactive. And from there, if you want to share this new exciting thing that you've found, you can do that by clicking share and directly sharing it with users in your workspace or internal teams around the business. If I navigate back to my product project space, there are a couple of different ways you can actually access the catalog data. You can do so from an existing canvas like we just did, or you can open up a new canvas. Here you can see I'm connected to my product metrics catalog; any of your data sources connected to that project will appear here. From here, you can add in visual or table cells, so I'll show you an example of a table cell. Really quickly, it's easy to visualize the data at a row-by-row level. As Count Metrics is designed to be stakeholder friendly and a self-serve tool, we have the option of adding in tables or visuals. But if you're an analyst-level user, you have extra capabilities: you can add SQL or Python cells directly off a table or visual cell, which is really great, so you can continue exploring the data, add extra calculated fields if you need to, and carry on from there. You can also, if you have analyst-level permissions, create calculated fields directly within a visual. Like here, if I want to add in a metric, I can do so, and it will pull through directly into that table. Thanks, Grace.
Just to clarify this for the audience: because you're an analyst, you can create your own arbitrary metrics directly in the semantic layer query, whereas if you're an Explorer user, you're limited to the aggregations that have been predefined, unless you build a separate query on top of it. Exactly. And in the visual or table builder, you can see I'm connected to my product metrics catalog, with my dataset here and, beneath that, all of the different views and the customizations I've made within those views. I'll show you in a minute the catalog builder and how we actually define those views and set this up, but I wanted to show you what it looks like for your stakeholders first. When building out this view, we predefined the date breakouts that we want our stakeholders to have available, and the same with our aggregates, to make it really easy to drag and drop those variables in and explore the data from there.

Going back to that project homepage, another way to quickly access the catalog data is through an explore. This is really great for quick one-off analysis, and it's really useful for your stakeholders to be able to begin working with the data and doing that exploration on their own, without having to come to you for every bit of analysis they're looking for. We can pull through any of those different metrics here; they could also pull through visuals and work from the different templates we have available. In terms of connecting that catalog and being able to access the catalog data from a project space, you do that the same way as connecting any database. From your project space, you simply select manage data, and here you can connect up the catalogs you've built alongside your database connections, and you can connect multiple if you need to. Giving access to users is done in the same way, through manage access, where you can define the roles you want your workspace users to have.

So now I thought it'd be worth showing you how to actually build your catalog and some of the cool features we've included within that. Here is the catalog section in your workspace homepage. From here, you can either create a new catalog, and it will prompt you to connect your database and start building from there, or, as I'll do now, open one of the catalogs we've already built, this product metrics catalog. Here you can see all of the views and datasets that sit within that catalog. When it comes to editing the catalog, you select edit catalog, and it opens up this IDE for you to build out your views and datasets. Views are stored in a YAML file like this, and the quickest way to create a view is by selecting plus here and creating a view from a table. It will open up your database connection and let you select your tables. I'll just bring through product sales like this, and it will populate the YAML file with all of the fields that sit within that table. From here, you can remove fields or customize the YAML as required, and I'll show you how I've done that in my events view. Here I've pulled through different aggregates, and I've added in labels and descriptions. I've also got the timeframes breakout here, which defines the breakout options I want my stakeholders to have, and you can pull through calculated fields here as well.
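As a rough idea of the kind of view being described here (an illustrative sketch with hypothetical key names, not the exact YAML schema in the product), an events view with labels, descriptions, a timeframes breakout, and a cache schedule might look something like this:

```yaml
# Illustrative sketch only; key names are hypothetical.
views:
  events:
    table: product.events
    cache:
      refresh: daily                   # schedule this view to be cached into DuckDB
    dimensions:
      workspace_id: {column: workspace_id}
      event_type:
        column: event_type
        label: Event type
        description: The kind of product event recorded
      event_date:
        column: created_at
        timeframes: [day, week, month] # the date breakouts stakeholders can drag in
    metrics:
      event_count:
        sql: count(*)
        label: Events
      active_workspaces:
        sql: count(distinct workspace_id)
        label: Active workspaces
        description: Workspaces with at least one event in the period
```

The idea is that labels, descriptions, and predefined breakouts are what the stakeholder sees in the drag-and-drop builder, while the SQL expressions stay governed in the catalog.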
So this YAML allows you to be as governed or flexible as you want when it comes to the final UI for your stakeholders. In the schema on the right-hand side, you can find all of the different customization options, but one important one I wanted to mention is caching. Ollie mentioned caching before, and how you actually set that up is here, at the view level. You cache each view, and by doing so you're essentially caching your whole catalog, so that users can access the catalog across the whole workspace from any canvas, and it will be pulled from the data stored on our DuckDB servers. That means massive savings on your query load, because it's not referencing your database every time; it's just pulling from that DuckDB-stored data.

So that's views, and then datasets are the joins and relationships between views. Within your dataset section, you can pull through any of the views you've defined above and choose how you want them to relate to each other. You can see here I've pulled through my workspace view, integrations, and events, and they're all joined on a one-to-many relationship. This is really great because it allows symmetric aggregation, so that when we're pulling through calculated metrics from our views, it's all symmetrically aggregated. That's actually the answer to David's question, I think. Yes, that's true. So, David, the answer to your question is yes: if you're doing a fan-out query where you're aggregating across two different one-to-many relationships, you get no duplication and the right aggregates.

Another cool feature I wanted to chat quickly about is our validation checker. Once you've built out your views and datasets, you have to validate and commit those changes, so I'll do that here. Whenever you do, it pops up with this validation checker, which checks all of the canvases attached to that catalog for any breaking changes that might happen as a result of the changes you've made. Here there are no errors, so I'll commit it. There's also version control here, so you can revert back to previous versions if needed. Cool, I think that covers pretty much everything I wanted to show on the demo side of things.

That's great. I think we're getting to the point where we want to make sure we open up for Q&A, because I'm sure there are some questions, but hopefully that was a helpful walkthrough. Let me just wrap up with a share of our roadmap, because I think that will be helpful for a few people to see. As you can see, it's already very functional: there's a huge amount of power in that modeling layer and flexibility in how you can use it. But here's what's coming up in our roadmap for Q2. We're going to enable dbt and external version control integration, so you can use dbt or git or GitHub for managing your YAML files. Hot reloading, so that before committing a change you can see exactly how your reports or canvases will look with that change; like a software developer, you can hot reload and understand your changes and their impacts. We're going to provide staging and branching environments, and give you advanced metric relationship modeling.
What that means is not just letting you define the logic of your metrics, but helping you build metric trees and helping the semantic layer define how metrics relate to each other, so you can own that in the modeling layer as well. And then, obviously, we'll provide public API access so you can use Count Metrics in other tools and other locations as well, and still have multi-tool access to the definitions you're creating. So hopefully that's an exciting roadmap for people who are looking at Count and seeing that it's a very powerful way to save cost, give flexibility, and still have that governed layer everyone loves, with that version-controlled environment.

Great, I think that's pretty much it for our content. I hope it's been helpful for everyone. In terms of giving it a go: at the end of this webinar we'll send you the recording, plus access to more materials and documentation so you can evaluate Count, understand in a bit more depth how it works, and make sure it fits your use case. It's also very easy to get started today. You can sign up for a free trial of Count, and all our plans have access to Count Metrics, so just head to count.co to access it yourselves, or, if you choose, book a demo directly on our website. Oh, there you go, this is what Mike's doing behind the scenes, apparently: setting up a wonderful CTA for you to use to access and play around with Count and follow our progress. But before we wrap up, I want to make sure we have a chance to answer any questions, so feel free to hit us up. Or if you'd like to drop off, you're welcome to; we'll send you the materials and you can get cracking. But thank you so much for joining us.

Yeah, so we do have a couple of questions, which I will ask of you now; either of you feel free to take them. We have one from John to start with, which was the first question that came in: how does the semantic layer differ from the data modeling layer, for example dbt?

Thank you, good question. So dbt is ultimately a data modeling tool which lets you define safe, reliable data artifacts in your data warehouse; ultimately, it's a way of version-controlling the tables that exist in your data warehouse. A semantic layer is different insofar as it lets you define the rules by which you can query those tables, so it's another layer of abstraction. You can argue that they're doing the same thing, which is to make raw data more usable and more defined; it's just that the semantic layer does that with an API interface and lets you define relationships between the different tables that dbt produces. Cool, hopefully that answers the question, John.

Another one from David: is advanced metric relationship modeling there to enable root cause analysis? Yeah, that's a good question. I think it ultimately does massively enable better root cause analysis; it gives you that flexible environment. One of the things we often find with customers who have used other semantic layer tools previously is that the semantic layer is too rigid for their use cases, and they can't actually use it for the queries they're writing.
So what we've found with our customers is that because you can do much more advanced relationship modeling, the semantic layer can be used for a much greater surface area of questions, if that makes sense. You have access to more metrics and more definitions than you do if you only have one-to-one modeling relationships, and that means it can be used for more things and can actually serve as a source of truth for analysts as well as for the wider business. If I'm honest, I don't know how some semantic layers get away without having a one-to-many relationship map, because it really limits the questions you can answer, makes everything much more rigid, and means you end up only using those semantic layers to define your reports, because you can't expect people to properly explore from there. So yeah, great question, David; it's exactly for that use case.

Cool, we actually have a few more come in. This next one is from Kofi: do queries to DuckDB come at additional cost? No. We've actually implemented DuckDB on the server, and just to be clear, DuckDB on the server is part of our compute engine that's there whether you use the metrics layer or not. You can use DuckDB as a computational engine even if you're just writing SQL; you can choose to import any queries you write against the database into DuckDB at any point in your workflow. It just becomes an extra multiplier when it comes to the semantic layer. And we've enabled DuckDB across all our plans without any additional cost; we haven't increased our prices for it. There are different levels of DuckDB capacity, though. For example, on our Explore plan you can cache tables up to four gigabytes in size, which for many B2B businesses is a decent size, and that's a per-table cache limit. But you can go up to a thirty-two gigabyte cache level if you wish on our bigger plans. So there is a tiering there, because obviously there is more cost the more you're caching in DuckDB. But this is a per-table cache limit rather than a limit for the catalog as a whole.

Cool, okay, next one. We have a question from Adi V. It may cross over with the previous one, but we'll ask it anyway: how do queries run while in a canvas (assuming DuckDB) by a data analyst, versus in reporting mode by a data consumer, and what are the cost implications? Yeah, good question. We have a very advanced computational model in Count. Effectively, be it through the semantic layer or not, you can run computation in three places: directly in your data warehouse, in DuckDB on the server, or, rather than using our servers, in the memory of your browser on your own computer. If the size of the dataset is less than a hundred megabytes, we'll push it to your browser so you can run it in memory even faster. So we're optimizing primarily for performance here rather than anything else. The queries run in the canvas by an analyst are really no different from the framework used for the consumer, other than that in report mode, when it's being used by a consumer, we only run the queries that are needed for that particular report.
Whereas, obviously, if an analyst is working in the canvas, we'll load everything, which means you might be seeing more data at once and therefore we've loaded more. But underneath, there's no difference in the queries that are run; the rules that govern how the canvas works are the same for a report and a canvas. It's the same mechanism, effectively, and the cost implications are therefore the same. It depends on how you've set things up, where you run your queries, and what you've cached at what point.

Nice. Okay, last question, I think, that we have; if anyone else wants a question answered, keep dropping them in the chat and I'll make sure they get answered. But this is the last one we have at the moment, from Mark: any plans in the future for Dataform integration as well as dbt? We are aware of Dataform; we know that if you're using the GCP environment, it's obviously a very powerful alternative to dbt. We've had a few requests for it, and it's something we're considering, but it's not going to be in Q2; as you can see, it wasn't on our Q2 roadmap, so we'll evaluate it again, probably at the end of Q2 once we've worked out what we've done there. Certainly, when it comes to enabling you to upload a manifest, as it were, to Count, the actual definitions of your models, that's something we'll enable faster, which means you can use other tools for defining the manifest of your data warehouse so we can replicate that in Count. So there'll be other ways, basically, even if it's not a full integration with Dataform.

Cool, that is the last question I think we have. Anything else before we wrap up? Not at all. Thank you so much for sticking around; it's been really fun. Thank you, Grace, for your demo and walkthrough, and Mike for pressing the buttons. Just joking, Mike, you've done a really good job. But I pressed them in such a good way. Exactly.

Go ahead, Edwin. Oh, so there's another question here, and we're happy to stick around for it if that's helpful. One from Kofi about how we minimize the impact of changes to a field or a view. Grace, do you want to pick that up? That's back to the validation checks, I guess. Yeah. So the validation checker that we've added runs whenever you commit any new change to a view or to your catalog in general. It checks all of the canvases and cells connected with that catalog and flags any breaking changes that might happen as a result, so you'll be made clearly aware of anything that might break because of changes to names or calculated fields or anything like that. And on our roadmap we have that preview mode Ollie mentioned earlier, so it will get to a point where, when you're updating your catalog, you'll be able to see in real time how those changes will affect the canvases, in a kind of dev environment. It's probably worth mentioning that it's not as fragile as you might think, unless you've hard-coded a metric rather than referencing it in your visual cell.
If you've changed the name of a field, it doesn't change the fact that we know that metric is mapped to the y-axis of certain charts. So it isn't the case that a small change to a name will suddenly break all your queries where sales is mentioned because you had a typo. The name of the field is what the user sees, but there's a hash key underneath which identifies the variable to the semantic layer. Cool. Thank you for alerting me to the question I missed there because I was looking at the wrong view; just as I was getting all giddy about my good button pressing, I screwed up. That's great stuff.

Cool. Yeah, I think that's everything. Any last thoughts, Ollie or Grace? Not at all. If there are any more questions, just drop us a message or an email, or come to the chat on our website. Brilliant. Well, thanks so much for coming along, everyone, and we'll hopefully see you soon. Thanks, everyone. Thanks, guys. Bye.