Hey, guys. This is Ollie, the CTO at Count. In this video, I'm going to share a new feature, DuckDB on the server, and show you how to take advantage of it and what it can do for you.

Okay. So we've got some taxi data here from BigQuery, and we're just selecting the first million rows. As you can see, by default Count only extracts up to ten thousand rows per cell from your database. That's just to make sure queries run as quickly as possible, and it doesn't make any difference if all of your cells are attached to the database. For example, here I've got a downstream cell that references the upstream one, just a simple SELECT COUNT(*), and as you can see we get a million rows rather than ten thousand. That's because the upstream cell is represented as a CTE in this query, rather than us using those ten thousand extracted rows directly.

As I'm sure you know, though, you can also run downstream cells in DuckDB. That has the benefit of being very quick, and it also reduces the pressure Count puts on your database. Up until now, though, this has been limited by the amount of data available in the browser. If I switch this cell back to BigQuery, you can see the curve is much smoother and the y-axis is much larger: the DuckDB version was only working with a sample of roughly one percent of the results. That's been a real limitation of DuckDB until now, but DuckDB on the server lets you go beyond it.

To take advantage of it, go to the upstream cell, open the sidebar, and you'll see an option called row limit, which, as I said earlier, is ten thousand rows by default. You can remove it entirely. In that case, we go back to the database, rerun the query, and extract every row the database returns. You can see there are now a million rows extracted onto Count's servers. There's also a new option that controls how many rows are downloaded from Count's servers into your browser. By default that's also ten thousand, although it can be changed to whatever you want, or removed entirely. Importantly, changing that number has no impact on the database: we don't go back to the database to rerun the query, we just ask Count's servers for more or fewer rows.

What you can also see now is that even if you have fewer than the total results in the browser, you still get accurate results, and that's because DuckDB is now running on the server. Count works out where to run DuckDB: if all the data is available in the browser, it runs DuckDB in the browser to reduce latency; if it isn't, but we have cached results on the server with more data, it runs DuckDB there instead. You don't need to worry about which one is used, because you'll get the same result either way. As you can see, this SELECT COUNT(*) now gives us a million, as we'd expect, and this visual looks the same whether I run it in DuckDB or in BigQuery.

One thing to note is that this relies on caching. If you don't have caching set, there are no results on our servers for DuckDB to analyze. So I'd recommend setting caching to an hour, a day, or whatever suits you, which you can do here, in order to take advantage of this.
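To make the CTE behaviour concrete, here's a rough sketch of the kind of SQL involved. The table and cell names (taxi_trips, cell_a) are hypothetical, and the exact queries Count generates may differ; this is just to illustrate where each query runs.

    -- Upstream cell (cell_a), attached to BigQuery: pulls the first
    -- million rows of the taxi data.
    -- SELECT * FROM `project.dataset.taxi_trips` LIMIT 1000000

    -- Downstream cell run against BigQuery: the upstream cell is inlined
    -- as a CTE, so the count reflects the full query, not the extract.
    WITH cell_a AS (
      SELECT * FROM `project.dataset.taxi_trips` LIMIT 1000000
    )
    SELECT COUNT(*) FROM cell_a;  -- 1,000,000

    -- The same downstream cell run in DuckDB queries the cached extract
    -- instead: 10,000 rows with the default row limit, or the full
    -- 1,000,000 once the limit is removed and DuckDB runs on the server.
    SELECT COUNT(*) FROM cell_a;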
In terms of how you set up your canvases, I'd also recommend running fewer cells attached to your database, extracting as much as possible from them onto Count's servers, and then doing as much as you can in DuckDB. Any type of cell can be run in DuckDB, whether it's a SQL cell, a visual, or a low-code cell; it all works fine. If you do that, you should find speeds improve markedly, and the impact on your database, and potentially on your compute costs, is much smaller.

The one final thing to note is that Python currently runs solely in the browser. So if you want to run Python and you need accurate results, you need to make sure that for any upstream cells the Python cells reference, you download everything from them into the browser.

Okay. Well, I hope this helps with the speed of your analysis and the pressure on your database. I'd love to hear your feedback. I hope you enjoy.
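As a rough sketch of that recommended setup, again with hypothetical table, column, and cell names (taxi_trips, extract_cell), a canvas might have one extraction query against the warehouse and keep everything downstream in DuckDB:

    -- Single cell attached to the warehouse: extract once, with Count's
    -- row limit removed so the full result lands on Count's servers.
    SELECT pickup_datetime, trip_distance, fare_amount
    FROM `project.dataset.taxi_trips`
    LIMIT 1000000;

    -- Every downstream cell (SQL, visual, or low-code) then runs in DuckDB
    -- against that extract, so the warehouse isn't queried again.
    SELECT date_trunc('day', pickup_datetime) AS day,
           avg(fare_amount)                   AS avg_fare
    FROM extract_cell
    GROUP BY 1
    ORDER BY 1;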