Content provided by Voice of the DBA. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Voice of the DBA or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Are you looking forward to SQL Server 2025? Or perhaps you think this is just another release, or perhaps you aren't looking for new features or capabilities in your environment. Maybe you don't care about new things, but are looking for enhancements to features introduced in 2017, 2019, or 2022. There is certainly no shortage of things that could be improved from previous versions (*cough* graph *cough*).

I ran across an article on the five things that one person is looking forward to in SQL Server 2025. It's a good list, and the things included make me consider an upgrade. Certainly, any improvements in the performance area, especially with all the investments made in Intelligent Query Processing over the last few versions, are worth evaluating. They might help your workload, or they might not, but if they do, then upgrade.

However, test, test, test. I can't stress that enough. Test with your workload, not some random queries. Spend some time setting up WorkloadTools, or find some other way to replay a set of queries from multiple clients, to see if performance improves. It's far too easy to look at a query in isolation and make a snap decision. Under load, performance sometimes looks different.

The HA improvements are also enticing, especially the idea of offloading backups more easily. Of course, this means you need to ensure you can, and know how to, restore a complex set of backups in an emergency. Distributed systems are complex, and backups from multiple nodes (remember, you might get unexpected failovers) are a distributed system. Make sure you consolidate those, and plan for potential disruptions if your backup system, share, or location is gone. Local backups are always nice, but Murphy's law might cause you problems in multiple ways with multiple nodes and backups moving across them. Again, test, test, test, and consider weird situations. They will occur, and you should ensure your staff has a simple way to deal with them.
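The multi-client replay idea can be sketched in a few lines. This is a toy harness, not a substitute for WorkloadTools: it uses SQLite as a stand-in database (against SQL Server you would open connections via something like pyodbc instead), and the query list here is a placeholder for a captured production workload.

```python
import sqlite3
import time
from concurrent.futures import ThreadPoolExecutor

def run_client(db_path, queries):
    # Each simulated client opens its own connection, as separate apps would.
    con = sqlite3.connect(db_path)
    for q in queries:
        con.execute(q).fetchall()
    con.close()

def replay(db_path, queries, clients=4):
    # Fire the same batch of queries from several concurrent clients and
    # time the whole run, rather than timing one query in isolation.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        futures = [pool.submit(run_client, db_path, queries)
                   for _ in range(clients)]
        for f in futures:
            f.result()  # surface any errors from the worker threads
    return time.perf_counter() - start

# Placeholder workload; a real replay would use queries captured from production.
elapsed = replay(":memory:", ["SELECT 1"] * 100, clients=4)
print(f"replayed workload in {elapsed:.3f}s")
```

Running the same captured workload against your current version and a 2025 test instance gives a rough comparison signal; the real tools add warm-up runs, per-query timings, and proper statistics on top of this basic idea.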
We've had a few SQL Server versions that leaped forward. SQL Server 2005 changed the paradigm, and I think SQL Server 2016 was another time of dramatic growth. Will SQL Server 2025 be one of those versions, or is it one with a few incremental improvements? Let me know your thoughts today.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes. Note, podcasts are only available for a limited time online.
I had to make a few changes to a SQL Saturday event recently. The repo is public, and some of the organizers submit PRs for their changes, while others send me an email/message/text/etc. for a change. In this case, an organizer just asked for a couple of image updates to their site. I opened VS Code, created a branch, added a URL for the images, and submitted my own PR. After the build, I deployed it.

And it didn't work. I had a broken image. I checked the URL in code and realized I had "events" copied before the URL, which wasn't valid. OK, edit the URL to be correct and repeat: new PR, build, merge, deploy.

And it didn't work. I was looking at the code live on the site, the code in the repo, and I was trying to reconcile paths and file names and keys and values and a few other things. I realized the world for a developer hadn't changed a lot; in fact, I was in the age-old loop: deploy, patch, patch the patch, fix the patch for the patch, and so on.

I don't even know that I could have gotten better here with testing, as these were one-off data changes that affected the site for users. If I enter the wrong data, it's wrong. I can't easily test for this. I have written code that was wrong, and a few simple tests would have caught my issues. I've also written code that isn't easy to test. If I am adding or changing data, it's hard to test that. Often, I might do some copy/pasting between the code and the test to generate the test. If I've typoed something, the typo continues through the test (in some cases). Even using a code generator or an AI to produce the INSERT or UPDATE code might not solve the problem. They might read my typos in a prompt.

One of the best things to help code quality in the last few decades is continuous integration (CI), where we have automated systems that compile code, test it, and run it. It's not perfect, but it does help reduce the silly mistakes many of us likely make every day when writing code.
Tests and CI can't prevent every typo and issue, but if we are testing intermediate systems, hopefully somewhere along the way a human or an AI agent tries to verify that the things we typed exist, and can catch a typo. In this case, I had to find where I'd mistyped the line and realized that I had the path wrong. The image was in a subfolder, and I needed to add that to the image URL.

Working with data is hard, and it's a constant source of simple mistakes. I don't know that we'll ever get away from patching the patch when data manipulation is involved.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes. Note, podcasts are only available for a limited time online.
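This particular class of mistake, a referenced image path that doesn't exist, is one of the few data typos a CI step actually can catch. Here's a minimal sketch; the folder layout and file names are invented for the demo, not taken from the SQL Saturday repo.

```python
import os
import tempfile

def missing_images(image_paths, site_root):
    """Return referenced image paths that don't exist under the site root."""
    return [
        p for p in image_paths
        if not os.path.isfile(os.path.join(site_root, p.lstrip("/")))
    ]

# Demo with a throwaway site layout: one image in a subfolder, one typo'd path.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "images", "2025"))
open(os.path.join(root, "images", "2025", "logo.png"), "w").close()

refs = ["/images/2025/logo.png",   # correct: includes the subfolder
        "/images/logo.png"]        # wrong: missing the subfolder
print(missing_images(refs, root))
```

A check like this running in the build would have flagged the missing-subfolder path before the deploy, though it says nothing about whether the *right* image is referenced, which is the harder data problem.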
When talking about DevOps, the goal is to produce better software over time: both better quality and a smoother process of getting bits to your clients. There are a number of metrics typically used to measure how well a software team is performing, and one of them is change fail percentage. This is the percentage of deployments that cause a failure in production, meaning a hotfix or rollback is needed. Essentially, we need to fail forward or roll back to get things working.

For most people, a failed deployment means downtime. I've caused a service to be down (or a page, or an app) because of a code change I made. This includes the database, as a schema change could cause the application to fail. Maybe we've renamed something (always a bad idea) and the app hasn't updated. Maybe we added a new column to a table and some other code has an insert statement without a column list that won't run. There are any number of database changes that might require a hotfix or rollback and could be considered a failure.

However, some people see an expanded definition. If a service is degraded (slower), is that a failure? Some people think so. If we change code in a database (or indexes) and see performance slow down, is that a failed deployment? Customers would think so. Developers might not like this idea, at least not without some sort of SLA that might allow for some things to be a little slower. After all, slow is still working, right?

What if I don't notice a problem? Imagine I add a new table or column, and the app starts accepting data and storing it. What if we are supposed to use this data downstream, and we don't notice it is being aggregated incorrectly by a process until many days later? Perhaps we've performed some manipulation or calculation on our data and the result isn't what we wanted. It might not be incorrect, but maybe it's ignoring NULLs when we want NULLs treated as 0s. Is that a failure?
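The column-list failure mode described above is easy to reproduce. This sketch uses SQLite rather than SQL Server, and the table and column names are made up for illustration, but the behavior is the same in both engines: an INSERT without a column list implicitly depends on the table's current shape, so adding a column breaks it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

# No column list: this silently relies on the table having exactly two columns.
con.execute("INSERT INTO orders VALUES (1, 9.99)")  # works today

# A later deployment adds a column...
con.execute("ALTER TABLE orders ADD COLUMN note TEXT")

# ...and the old statement now fails with a column-count mismatch.
try:
    con.execute("INSERT INTO orders VALUES (2, 5.00)")
except sqlite3.OperationalError as e:
    print("broken by the schema change:", e)

# The fix: name the columns, so new columns don't break existing code.
con.execute("INSERT INTO orders (id, amount) VALUES (2, 5.00)")
```

Always naming the column list makes existing INSERT statements additive-change-safe, which is exactly why this is a common code-review rule for database code.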
If I deploy today and Bob or Sue notices next week that the data isn't correct, that's a failure. I don't know that I'd count downtime from today until next week, but from when Bob or Sue files a ticket, the clock starts on calculating the MTTR (mean time to recovery).

I don't often see database deployments failing from the "will it compile on the production server" standpoint. Most code gets tested on at least one other system, and with any sort of process, we catch those simple errors. More often than not, we find performance slowdowns or misunderstood requirements or specifications. In those cases, some of you might consider this a failure and some may not. I suppose it depends on whether these issues get triaged as important enough to fix.

While I might have a wide definition of deployment failures for most coding problems, I don't for a performance slowdown. Far too few people really pay attention to code performance, and many are happy to let bad code live in their production systems for years.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes. Note, podcasts are only available for a limited time online.
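The two metrics discussed here reduce to simple arithmetic over a deployment log. This sketch uses invented data; real tooling and the published DORA definitions add nuance, but note how the MTTR clock starts at the ticket, as argued above, not at the deployment.

```python
from datetime import datetime

# Invented history: (deployed_at, failed, ticket_filed_at, recovered_at).
deployments = [
    (datetime(2025, 6, 2, 9, 0), False, None, None),
    (datetime(2025, 6, 3, 9, 0), True,
     datetime(2025, 6, 9, 14, 0), datetime(2025, 6, 9, 18, 0)),
    (datetime(2025, 6, 4, 9, 0), False, None, None),
    (datetime(2025, 6, 5, 9, 0), True,
     datetime(2025, 6, 5, 10, 0), datetime(2025, 6, 5, 12, 0)),
]

failures = [d for d in deployments if d[1]]

# Change fail percentage: failed deployments over total deployments.
change_fail_pct = 100 * len(failures) / len(deployments)

# MTTR: mean of (recovery - ticket filed), in hours.
hours = [(recovered - ticket).total_seconds() / 3600
         for _, _, ticket, recovered in failures]
mttr_hours = sum(hours) / len(hours)

print(f"change fail percentage: {change_fail_pct:.0f}%")  # 50%
print(f"MTTR: {mttr_hours:.1f} hours")                    # 3.0 hours
```

Note that the June 3rd deployment's failure wasn't noticed for six days; starting the clock at the ticket keeps that quiet week out of the MTTR, which is exactly the judgment call the post describes.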