Hey, it's Mitra from Count, and in this video I'm going to show you some of the settings and techniques you can use to make your canvases more efficient. By following some of this best practice, you should be able to improve the speed of your queries and also reduce the number of queries going back to your database.

To start, let's look at some of the canvas settings you can apply. One way to reduce the query load on your database is to enable caching. If I click on my canvas so that nothing is selected, the right-hand design panel shows my caching settings. I have three options here. If I never cache results, then every time I open my report or canvas, queries for visible cells will be sent back to my database. But maybe I don't need my data refreshed that often, in which case I can cache results for a certain amount of time: an hour, a day, twice a day. Or I could cache results indefinitely, which is the most efficient option: queries will not rerun unless you edit cells or execute them manually using the run cell button. You can also use caching in combination with scheduled runs. Perhaps you schedule a run daily, very early in the morning, say 5 AM, so that data is already run and refreshed in time for any reporting that is looked at that morning.

Okay. Next, let's move on to manual cell execution. There are two options for this: we can change the setting across the whole canvas, or we can do it for specific cells. Here I have some queries running against my database, and perhaps I want to make some edits. I know I'm going to be clicking in and out of these cells as I work, and I don't particularly want them hitting my database again and again while I do so.
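As an aside, the caching options just described behave like a time-to-live cache on query results. This is a minimal sketch of that idea in Python; the class and method names are illustrative, not Count's actual API.

```python
import time

class QueryCache:
    """Sketch of TTL-style result caching: results are reused until the
    time-to-live expires, so repeated opens do not re-query the database."""

    def __init__(self, ttl_seconds):
        # ttl_seconds=None models "cache indefinitely": only an explicit
        # manual refresh will re-execute the query.
        self.ttl = ttl_seconds
        self._store = {}  # query -> (result, timestamp)

    def get_or_run(self, query, run_fn):
        entry = self._store.get(query)
        if entry is not None:
            result, ts = entry
            if self.ttl is None or time.time() - ts < self.ttl:
                return result  # cache hit: no database query issued
        result = run_fn(query)  # cache miss: hit the database
        self._store[query] = (result, time.time())
        return result

    def refresh(self, query, run_fn):
        # Models the manual "run cell" button: force re-execution.
        self._store.pop(query, None)
        return self.get_or_run(query, run_fn)
```

With an hour-long TTL, repeated calls with the same query within that hour return the cached result without touching the database; a scheduled early-morning refresh would simply call `refresh` before anyone opens the report.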
With nothing selected, I can see in the right-hand panel an option to execute cells automatically, and I can turn this off. If I then edit my query, say by adding another column, we'll see a warning that the cell has not been executed. It tells us roughly how much memory the cell will take when it does run, and gives us the option to run it manually. As we know, when we change a cell, everything downstream also updates, so the two cells downstream now show the same message, and we can rerun these manually when we're ready. It can be really useful to stop everything running in your canvas until you've finished your work, and then turn it back on again.

We also have the option to toggle this for individual cells. If I click on my cell, in this dropdown I can disable cell execution just for this cell. Now if I edit it, I receive the same message for this cell, but the cells downstream are updating. This can be useful if you need more granular control, for example if certain cells return larger results and you'd like to toggle them off at certain times.

Next, we're going to look at using local queries. Using Count's DuckDB database alongside your own is a really powerful way to unlock the freedom to do extensive analysis on your data without worrying about the query load going back to your database, and there are several ways to make use of it in Count.

First, we have DuckDB in the browser. This is where we switch database cells over to local cells that use memory in the browser. The benefit is that they run really fast in the canvas, but the one limitation is that the memory limit is just over 120 megabytes. Usually there's some aggregation needed within your database cells to get your data into smaller chunks before you switch over to local cells.

Next is DuckDB on the server.
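The "aggregate in the database first, then switch to local cells" pattern for DuckDB in the browser can be sketched like this. As an illustration only, sqlite3 from the standard library stands in for both your warehouse and Count's local DuckDB engine, and the table and column names are made up.

```python
import sqlite3

# 1. The "database cell": aggregate in the warehouse so only a small
#    result set crosses the wire (well under a ~120 MB browser limit).
warehouse = sqlite3.connect(":memory:")
warehouse.executescript("""
    CREATE TABLE orders (region TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('north', 10.0), ('north', 20.0), ('south', 5.0);
""")
daily_totals = warehouse.execute(
    "SELECT region, SUM(amount) AS total FROM orders GROUP BY region"
).fetchall()

# 2. The "local cells": load the small aggregate into a local database
#    and do all further analysis there -- no more warehouse queries.
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE daily_totals (region TEXT, total REAL)")
local.executemany("INSERT INTO daily_totals VALUES (?, ?)", daily_totals)
top_region = local.execute(
    "SELECT region FROM daily_totals ORDER BY total DESC LIMIT 1"
).fetchone()[0]
```

The key design point is step 1: the GROUP BY shrinks the data before it leaves the warehouse, so everything downstream of it can run locally without any further database load.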
This works in a similar way, but the memory limit is vastly increased: you can hold up to 32 gigabytes in Count's local DuckDB database on the server. The exact limit depends on the plan you're on, but it means you can have fewer initial database cells and switch over to local cells much sooner.

And finally, we have DuckDB in Count Metrics. This is our semantic layer, and caching for this is available at a workspace level. You'll notice that DuckDB in the browser and on the server works within individual canvases, where you need to start with at least one database cell before switching over. But with our semantic layer we cache at a workspace level, and multiple canvases can make use of this, which can massively decrease your database query load.

I'll demonstrate how to use DuckDB in the browser. Here I have a number of cells all going back to my database: we're creating two different tables, then joining them, and then doing subsequent analysis and a visual. If I keep the two database cells that read from my tables, I can highlight the remainder, come over to the source dropdown, and switch them to local DuckDB. Now everything downstream of those two database cells runs within Count's local database and not against your own database.

I'll also demonstrate how to use DuckDB on the server. As I mentioned, the memory limit for this is vastly higher than DuckDB in the browser. To show what this looks like, we have a database cell here, and I am limiting the query to one hundred thousand rows. By default, we limit the number of rows that come into the browser to ten thousand, and there's a little message in the corner saying that rows are limited and we've only retrieved the first ten thousand. This means that when I subsequently add a local DuckDB cell, we get a warning here.
DuckDB in the browser only works if you have managed to download everything into the browser initially. If we go back to our parent database cell, we now have the option in the design bar to change the browser download limit, which by default is ten thousand rows, to unlimited. We can see that this has changed down here: we are now showing ten thousand of one hundred thousand rows, and when I click on this message, we see confirmation that all one hundred thousand rows have been extracted from your database onto Count's servers. So when I look at my local DuckDB cell, we no longer have that warning, and we are using that increased memory allowance on the server to do our analysis.

Next, I'm going to look at some techniques we can employ to reduce query load. The first two relate to our approach to coding. The first is to reduce the amount of data returned: avoid SELECT * statements where possible, include filters as early as possible, and ensure you're only ever returning the data you're actually going to use. The second is to consider the efficiency of your code. Common issues include inefficient joins, such as self joins and cross joins, which we'd recommend avoiding where possible.

The next two are techniques you can employ within Count. The first is to compile DAGs into fewer cells. I have an example here where I've done some analysis, with a couple of branches and some visuals at the end. The power of Count is that you can build up your analysis through individual CTEs, which brings many benefits. But at the end of a piece of work, you may not need to retain all these individual cells, and we can compile them back into fewer cells, which reduces the number of queries going back to your database if these are database cells. To do that, you can go to a cell downstream.
You can right-click on it and go to Copy as SQL, which shows the compiled SQL being run from everything upstream of it. You can copy this, paste it into new cells, edit it as you wish, and end up with something that looks like this. So I've gone from nine different queries going back to my database down to five.

Next, we're going to look at using hidden frames. Frames can be really useful for lots of reasons: we can select one from the toolbar, drag it over our work, and retitle it. The benefit of hidden frames is that when cells are hidden in a frame, they will not run unless a cell outside the frame depends on them. So if we've done some self-contained analysis that no longer needs to be updated, but that we want to keep somewhere accessible, we might put it in a frame and then hide it. We can do that in the design bar, or by selecting the frame and finding the icon here. It's now hidden, and it won't rerun because it doesn't have any dependencies elsewhere in the canvas. That can be a really nice way of organizing your work well and reducing the number of queries going back to your database.

Next, let's look at how your database is set up. Outside of Count, it's worth considering your raw tables and whether they can be improved, perhaps through indexing and partitioning, or by creating more aggregated tables for use in Count. Within Count, for some databases there are advanced database settings where you can change the maximum number of simultaneous connections between Count and your database, and also change your timeout setting. You might consider reducing the timeout so that queries fail faster.

Next, I'll touch on monitoring your current usage.
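To make the compile-to-fewer-cells idea concrete, here is a sketch of the kind of query that Copy as SQL produces: three separate cells (filter, aggregate, final result) collapsed into one query using CTEs, so only one query goes back to the database. The table and column names are invented, and sqlite3 stands in for your database; note the early filter and explicit column list in the first CTE, the two coding habits mentioned above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, event TEXT, ts INTEGER);
    INSERT INTO events VALUES
        (1, 'click', 100), (1, 'click', 200),
        (2, 'click', 150), (2, 'view', 160), (3, 'view', 300);
""")

# One compiled query instead of three separate database cells.
compiled = """
WITH recent AS (               -- cell 1: filter as early as possible
    SELECT user_id, event      -- only the columns actually used, no SELECT *
    FROM events
    WHERE ts >= 150
),
clicks_per_user AS (           -- cell 2: aggregate
    SELECT user_id, COUNT(*) AS n
    FROM recent
    WHERE event = 'click'
    GROUP BY user_id
)
SELECT user_id                 -- cell 3: final result
FROM clicks_per_user
ORDER BY n DESC, user_id
"""
rows = conn.execute(compiled).fetchall()
```

Each CTE corresponds to what was previously its own cell, so the DAG structure is preserved inside the query even though only one round trip to the database remains.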
If you're on an Enterprise plan, we provide you access to a telemetry database, which details your usage of Count. By setting up some reporting on this, you can identify canvases that might benefit from the techniques I've talked about in this video: those that aren't using caching, or those running a particularly high number of queries, for example. You can also make use of our alerts feature so that this information is regularly sent to the person or team responsible for this monitoring.

And then finally, we'll come on to some other troubleshooting ideas. If performance has got to a point where you just need to cancel your queries, you can do this at several levels. At the cell level: if I click on my cell and go to my dropdown, you'll see that if this query were running, it would give me the option to cancel it. I can also cancel at the canvas level by using the logo dropdown, coming down to the data menu, and cancelling all pending queries. And finally, I can do this at the workspace level: within your workspace, if you click on your connection and come down to the queries section, you'll see the number of pending queries. If some are stuck running, click 'see all' to get a list of pending queries; you can select one or more of them, and an option will appear to cancel those. So those are a few ways you can unblock yourself if you find you need to cancel queries.

Lastly, consider whether any other tools outside of Count are running queries. If queries are running slowly, it could be because activity from elsewhere on your database is slowing everything down.

Okay. I hope that's been a useful overview of the considerations you can make to improve the performance of your analysis in Count and to reduce the query load on your database.
There's more information on everything I've discussed here in our documentation. A quick way to access it is to use the question mark in the corner of your screen and click on Help and Support, which will open the docs in a new window. Thank you.