Hello, it's Jason from Count here. I'd like to talk about our new dbt Core integration. We've had a dbt Cloud integration for some time, which allows pulling model metadata in from your project. With the Core integration, you can now pull it directly from GitHub. In this case, we're pulling from a GitHub branch, demo-2-0-3. If we take a look at a summary of the metadata we have here, we've got some tables from BigQuery, but also the model, source, and macro definitions from our dbt project, and some information about the commit this was generated from. To walk through some of the features of our Core integration, let's investigate a model that we've heard is having some problems in production. In this case, we have a database table, dim_customers, and this dbt icon here means there's a model associated with it. Now, we could view this model in GitHub, but what we really want is to look at the results returned by this model, and the canvas is a great place to do that. There's some information about the model here, and some information about the entities. To add it to the canvas, we can just click this button, and we can also add some additional context by picking some upstream models to include as well. In this case, we'll just add the two parents: a staging view and a fact table. When I add these models, they appear in the canvas as shells. One thing to note is that when we import these models, the relationships remain intact, indicated by the pink arrows here. We also compile these models on the fly using the raw Jinja from GitHub: if we detect a relationship between cells, we highlight it, as we do between these two now, and when we compile the Jinja, you'll see the compiled SQL underneath. That compiled SQL is what's actually executed when running these cells.
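To make "compiling the Jinja on the fly" concrete, here is a minimal sketch of the idea: a dbt model file is a SQL template, and rendering it with a stub implementation of dbt's `ref()` function produces the SQL that actually runs. The model SQL, project name, and `ref` resolution scheme below are illustrative stand-ins, not taken from the demo project.

```python
# Toy illustration of compiling a dbt model's raw Jinja into executable SQL.
# dbt itself uses Jinja templating; the ref() stub here is a simplification.
from jinja2 import Environment

# Hypothetical raw model source, as it might sit in the GitHub repo.
RAW_MODEL = """
select customer_id, sum(amount) as lifetime_value
from {{ ref('stg_payments') }}
group by customer_id
"""

def compile_model(raw_sql: str, project: str = "analytics") -> str:
    env = Environment()
    # Stub: dbt resolves ref('name') to the model's materialized relation.
    env.globals["ref"] = lambda name: f"`{project}`.`{name}`"
    return env.from_string(raw_sql).render().strip()

compiled = compile_model(RAW_MODEL)
print(compiled)  # the {{ ref(...) }} call is replaced with a concrete table
```

The key point is that the canvas shows both layers: the raw Jinja you edit, and the rendered SQL underneath it that the warehouse executes.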
Now, I've been told that the problem in this downstream cell, dim_customers, lies in the lifetime value column. To get a feel for it, let's pull it in from the sidebar and, using the regular Count visualization tools, see what's going on. I've been told to expect the values in this column to extend into the hundreds, but here they're all small. So what's going on? Well, I'm not an expert in this model, so I'm going to ask my colleague Ollie to come in and help me figure it out. Ollie knows that the problem lies in this model here, and it's a relatively complex model, so to make some space for ourselves, we can explode it into its constituent CTEs. Ollie thinks the problem is in this one, which is very helpful, so let's zoom in over here and take a look. There's immediately a suspicious block of Jinja here involving a filter. If we hover over it, we can see that we're actually filtering our amount column down to low values, which is probably not the intended behaviour. One way to investigate is to use autocomplete to see the macro definitions directly from your project, and here, if we read the description, clearly we're not using this macro correctly: we're filtering out values whenever they exceed a low threshold. So let's fix this and increase the value to something much higher. When we do, you'll notice that we recompile this macro inline, so the amount filter has changed. We've also re-executed that order payments CTE, and that has re-executed all the downstream cells: the other cells in this model, our downstream dim_customers model, and the visualization we built of the lifetime value column. And this is looking much more like we'd expect.
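A sketch may help show the kind of macro bug at play here: a Jinja macro that builds a filter clause around a threshold, where too low a threshold silently drops all the large payments. The macro name, its SQL, and the table name are hypothetical stand-ins for the real project's macro.

```python
# Illustrative dbt-style macro: filter_amount() keeps rows below a limit.
# With limit=10 it discards every realistic payment; raising it fixes that.
from jinja2 import Environment

MACRO_AND_QUERY = """
{%- macro filter_amount(column, limit) -%}
{{ column }} < {{ limit }}
{%- endmacro -%}
select * from payments where {{ filter_amount('amount', limit) }}
"""

def compile_filter(limit: int) -> str:
    # Render the macro definition and its call site into plain SQL.
    return Environment().from_string(MACRO_AND_QUERY).render(limit=limit).strip()

print(compile_filter(10))    # the buggy threshold: keeps only tiny amounts
print(compile_filter(1000))  # the fix: a much higher threshold
```

Because the macro is expanded inline at compile time, changing one argument recompiles the filter clause, which is why every downstream cell re-executes after the edit.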
Now, one of the big changes with the dbt Core integration is that, now we've fixed our model and we understand it, we can commit these changes straight back to GitHub from the canvas. When we open this diff view, you can see that we've picked up a change to this .sql file. It's just the cell that we edited, and it changed the filter value from ten to a thousand. A simple diff, but an important one. So let's commit it. We do this through a GitHub app that you can choose to install on your dbt repositories, and once you've done that, you can commit from the canvas. We'll do that here, and if you look over in GitHub, you'll notice that the change has been picked up. Then, to ensure everything's up to date, the thing to do is to come back and fetch the latest state of your repository; we'll re-run dbt and get the new model definitions that way. So that's a quick run-through of the new changes in our dbt Core integration. I hope you find them useful, and I hope you enjoy working with dbt Core in Count.