Timothy Smith

Tim manages hundreds of SQL Server and MongoDB instances, and focuses primarily on designing the appropriate architecture for the business model.

He has spent a decade working in FinTech, along with a few years in BioTech and Energy Tech. He hosts the West Texas SQL Server Users' Group, and teaches courses and writes articles on SQL Server, ETL, and PowerShell.

In his free time, he is a contributor to the decentralized financial industry.

We can regenerate keys in the Azure Portal for our Azure Cosmos DB

Getting and Updating Connection Information for Azure Cosmos DB

May 24, 2019 by Timothy Smith

After we set up our Azure Cosmos DB, we may want to get, add to, or update existing properties. We may use some of the get functionality that PowerShell provides to dynamically save values to encrypted configuration files or tables that we use for application purposes, and this functionality could be added to the creation of the Cosmos database account or run as a separate step after creation. In secure contexts, this keeps the properties from ever passing through human eyes, since they are saved directly to an encrypted location. In the same manner, we may want to regenerate the keys for the account and save the connection strings with the new keys.
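As a rough sketch of that pattern (assuming the Az module; the resource group and account names below are placeholders), we might regenerate the primary key and then pull the connection strings that now carry it:

$resourceGroup = "OurResourceGroup"
$accountName   = "ourcosmosaccount"
$resourceType  = "Microsoft.DocumentDb/databaseAccounts"

# Regenerate the primary key (keyKind may also be secondary,
# primaryReadonly, or secondaryReadonly).
Invoke-AzResourceAction -Action regenerateKey `
    -ResourceType $resourceType -ApiVersion "2015-04-08" `
    -ResourceGroupName $resourceGroup -ResourceName $accountName `
    -Parameters @{ keyKind = "primary" } -Force

# List the connection strings, which include the new key, so they
# can be saved to an encrypted location without human review.
$connections = Invoke-AzResourceAction -Action listConnectionStrings `
    -ResourceType $resourceType -ApiVersion "2015-04-08" `
    -ResourceGroupName $resourceGroup -ResourceName $accountName -Force
$connections.connectionStrings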

Read more »
The options we see when we select the download template for automation

Creating and Removing Azure Cosmos DBs with PowerShell

May 21, 2019 by Timothy Smith

When managing Azure Cosmos DB, we can create resources through the Azure portal interface or through the command line in the portal. PowerShell also supports some functionality for creating and managing these resources, which can help development teams automate the creation of these databases for quick builds, unit and security tests, and removals when the resources aren't required after the tests. We can also use these scripts to create templates for multi-scaling creations (like databases in a group designed for horizontal scale). Generally, in one-off situations, the Azure Portal will suffice for deployments where the cost of developing automation isn't justified. In this tip, we'll look at the process of creating a blank Azure Cosmos DB and removing that same database.
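A minimal sketch of that create-and-remove cycle (assuming the Az module; the names, location, and API kind are placeholders):

$resourceGroup = "OurResourceGroup"
$accountName   = "ourcosmosaccount"

# Create a blank Cosmos DB account (MongoDB API shown here).
New-AzResource -ResourceType "Microsoft.DocumentDb/databaseAccounts" `
    -ApiVersion "2015-04-08" -ResourceGroupName $resourceGroup `
    -ResourceName $accountName -Location "West US 2" -Kind "MongoDB" `
    -PropertyObject @{
        databaseAccountOfferType = "Standard"
        locations = @(@{ locationName = "West US 2"; failoverPriority = 0 })
    } -Force

# Remove it once the tests that needed it have finished.
Remove-AzResource -ResourceType "Microsoft.DocumentDb/databaseAccounts" `
    -ApiVersion "2015-04-08" -ResourceGroupName $resourceGroup `
    -ResourceName $accountName -Force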

Read more »
By querying Azure cost information, we can share it with development teams and work to reduce costs

Extract Azure Costs Using PowerShell

May 20, 2019 by Timothy Smith

With strong organization and design in place for our development teams, cloud infrastructure, and security, we'll now extract Azure cost information that we can share with our organization. In addition, we can retain this information to track growth (or reduction) in costs over time. This step is important because it gives our teams insight into their development, and it serves as another security audit that can catch unusual growth (or significant reductions) in resource costs that may be the result of an attacker. Our ultimate goal in tracking these costs and sharing them with teams is to improve our development and possibly re-organize it as needed, giving us the ability to further reduce our spending.
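As a sketch of the extraction step (assuming the Az.Billing consumption cmdlet; the 30-day window and per-resource grouping are illustrative):

# Pull the last 30 days of usage detail.
$usage = Get-AzConsumptionUsageDetail `
    -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date)

# Sum the pre-tax cost per resource so teams can see what drives spend.
$usage | Group-Object InstanceName | ForEach-Object {
    [pscustomobject]@{
        Resource = $_.Name
        Cost     = ($_.Group | Measure-Object -Property PretaxCost -Sum).Sum
    }
} | Sort-Object -Property Cost -Descending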

Read more »
Compare auditing Azure costs using self-auditing on the same App Service versus auditing from an independent server

User Security and Risks to Azure Costs

May 16, 2019 by Timothy Smith

We’ve looked at both the organization and development sides of managing Azure costs. One risk we face is attackers who compromise an account and mis-scale resources (such as scaling up), driving up our costs. Another scenario is attackers scaling resources too low, which affects clients’ ability to do their work (performance problems) – a separate risk that may result in lower costs on the cloud side, but higher costs to our reputation. A third risk is reconnaissance of our Azure use: this allows attackers to gather information about our design and later mount a wide range of attacks that will appear normal to us – in this case, Azure costs may be only one of the impacts, with other impacts being just as severe.

Read more »
We can also add tags without scripts by selecting the tag option under the Azure asset.

Controlling Azure Costs Using Scaling and Tags

May 13, 2019 by Timothy Smith

Depending on our design and security, we can create functions or use built-in tools to control our Azure costs. In some contexts, we may look at the overall cost of the tools we’re using, which the Azure portal conveniently shows. Applying what we’ve looked at on the organization and development level, we can organize resources by their design (or ad hoc, as we’ll see) and create scripts that control our scale for situations where we want higher or lower capacity. We’ll look at both of these scenarios and how they can help us in both organization and development contexts.
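For the tagging side, a sketch like the following adds a cost-center tag to an existing resource (the resource and tag names are placeholders):

$resource = Get-AzResource -ResourceGroupName "OurResourceGroup" `
    -Name "OurAppService"

# Merge a cost-center tag with whatever tags already exist.
$tags = $resource.Tags
if ($null -eq $tags) { $tags = @{} }
$tags["CostCenter"] = "Development"
Set-AzResource -ResourceId $resource.ResourceId -Tag $tags -Force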

Read more »

Securely Working with Invoke-SqlCmd

March 26, 2019 by Timothy Smith

We have a convenient tool for working with PowerShell and SQL Server when using Invoke-SqlCmd. As we saw when running statements, we can run DDL and DML changes with the command without writing our own custom scripts. This carries advantages when we need to develop quickly with PowerShell, but it can come with drawbacks on security if we’re not careful how we use this function. We’ll look at security when using this function, starting with a few examples of what we can do when we have unlimited access, along with how we can design our environment to be strict about our use of this tool.
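As a sketch of a more restricted call (assuming the SqlServer module; the server, database, and table names are placeholders), we might prompt for a least-privilege credential rather than embedding one in the script:

# Prompting keeps the password out of the script and its history.
$cred = Get-Credential -Message "Connect with a least-privilege login"
Invoke-Sqlcmd -ServerInstance "OurServer" -Database "OurDatabase" `
    -Credential $cred -QueryTimeout 30 `
    -Query "SELECT TOP 10 * FROM dbo.SampleTable;"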

Read more »
Our CSV file is returned with the Export-CSV function, which captures the output from the select

Working with PowerShell’s Invoke-SqlCmd

March 18, 2019 by Timothy Smith

PowerShell features many one-line commands for working with SQL Server, one of which is Invoke-SqlCmd. This tool can be useful in many development contexts where we need to quickly execute scripts or test code, and it helps to know some of the parameters we’ll often use. In addition, because we may want a custom script built on some of the underlying .NET objects, we’ll look at an alternative: a custom PowerShell script that connects to SQL Server to run commands. The latter can be useful because one-line commands have a tendency to change in future versions of PowerShell, and working with the library directly can sometimes avoid this challenge.
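A sketch of that alternative, using the System.Data.SqlClient objects directly (the connection string and query are placeholders):

$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = "Server=OurServer;Database=OurDatabase;Integrated Security=true"
$command = New-Object System.Data.SqlClient.SqlCommand
$command.Connection  = $connection
$command.CommandText = "SELECT COUNT(*) FROM dbo.SampleTable;"

$connection.Open()
$rowCount = $command.ExecuteScalar()   # returns the single value from the query
$connection.Close()
$rowCount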

Read more »

DevSecOps: Security Testing Around Builds and Shared Information

March 11, 2019 by Timothy Smith

One big component of DevSecOps surrounding security testing involves how we build and deploy with shared access to details that may be valuable to an attacker. In order to understand these risks, we must think like an attacker who wants to compromise an environment that is focused on quickly writing code, building it, testing it, and deploying it across multiple environments. An attacker’s ultimate target will be the highest environment – often production – but some attackers may target lower environments because they may be able to inject code that is then deployed to production. In addition, an attacker may only be trying to learn how an environment is laid out in order to attack in other ways, such as using social engineering with key information.

Read more »

DevSecOps: Developing with Automated Security Testing

March 6, 2019 by Timothy Smith

A key component of DevSecOps, and much like running unit tests to validate code after a build, running automated security testing after an application has been deployed (such as automated penetration tests) gives us a tool that identifies security risks. As we’ve seen recently, a growing number of companies have experienced compromised information, and with the development culture of “move fast and break things”, I expect this trend of successful attacks to continue. Before we look at our options for automating this testing, we want to be aware of its limits, evaluate the requirements, and consider common designs that are useful.

Read more »

Getting Started with Subdocuments in Azure Cosmos DB

February 22, 2019 by Timothy Smith

As we’ve worked with Azure Cosmos DB, we’ve seen that we can store data within fields and that the fields of each document don’t always have to match – though we still want some organization for querying. This field-and-value storage becomes useful when working with object-oriented languages, as these fields can be keys whose values we extract as properties. As an example, the below PowerShell line creates a JSON document in an object, and we can see that we can extract the values of these keys from the JSON object.
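The line itself isn’t reproduced in this excerpt, but a sketch of the idea (with illustrative workout fields) might look like:

# Create a JSON document as an object and read its keys as properties.
$doc = '{"exercise":"squat","sets":5,"weight":225}' | ConvertFrom-Json
$doc.exercise   # returns squat
$doc.weight     # returns 225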

Read more »

Applying Field Operators and Objects in Azure Cosmos DB

February 15, 2019 by Timothy Smith

Since we will sometimes need to remove documents in Azure Cosmos DB, we’ll want to be able to specify which documents to remove. In some cases, this will be as simple as specifying a field for removal, such as removing one type of workout in the temporary database we’ve created. In other delete situations, we’ll want to remove documents when the value of a field isn’t what we expect – such as greater than what we want. This applies to updates as well – we may want to drill into a specific value range for an update. In this tip, we’ll look at using operators with strings, numeric types and dates.

Read more »

Updating and Querying Details in Azure Cosmos DB

February 12, 2019 by Timothy Smith

In the first part of this series, Getting Started with Azure Cosmos DB, we looked at using Azure Cosmos DB to save an individual’s fitness routine and why this database structure suits this data better than a SQL database, while also showing that we still have to organize our structure the way a file system organizes files. In this part of our series, we’ll begin looking at the terminology translation between NoSQL and SQL, along with running updates for our documents and queries with filters that return some fields in our documents, but not others.

Getting Started with Azure Cosmos DB

February 11, 2019 by Timothy Smith

In the past two years, we’ve seen an explosion in growth with document-oriented databases like Azure Cosmos DB. MongoDB – one of the major document databases – went live on the Nasdaq and attracted some attention in the past year as well. While more developers are using the document structure for some appropriate data models, less than 10 years ago, some in the industry were predicting that document databases were unnecessary and wouldn’t last because all data could be flattened to fit the SQL model. I took the opposite approach, being an early adopter of MongoDB along with continuing to use SQL databases as I saw opportunities in both SQL and NoSQL for various data structures. While some data do fit the SQL model and SQL will continue to exist, some data are best for document databases, like Azure Cosmos DB. In this series, we’ll be looking at the why and how of document databases.


CTEs in SQL Server; Using Common Table Expressions To Solve Rebasing an Identifier Column

January 31, 2019 by Timothy Smith

Since we know that the SQL CTE (common table expression) offers us a tool to group and order data in SQL Server, we will see an applied example of using common table expressions to solve the business challenge of re-basing identifier columns. We can think of the business problem like the following: we have a table of foods that we sell with a unique identifier integer associated with the food. As we sell new foods, we insert the food in our list. After a few years, we observe that many of our queries involve the foods grouped alphabetically. However, our food list is just a list of foods that we add to as needed without any grouping. Rather than re-group or re-order through queries using a SQL CTE or subquery, we want to permanently update the identifier.
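A sketch of that re-basing step, run here through Invoke-SqlCmd (the table and column names are illustrative, and FoodId is assumed not to be an IDENTITY column, since those can’t be updated directly):

# An updatable CTE can target the single base table beneath it.
$rebase = @"
WITH OrderedFoods AS (
    SELECT FoodId,
           ROW_NUMBER() OVER (ORDER BY FoodName ASC) AS NewId
    FROM dbo.Foods
)
UPDATE OrderedFoods
SET FoodId = NewId;
"@
Invoke-Sqlcmd -ServerInstance "OurServer" -Database "OurDatabase" -Query $rebase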

Read more »

Inserts and Updates with CTEs in SQL Server (Common Table Expressions)

January 28, 2019 by Timothy Smith

In the first article of this series, CTEs in SQL Server; Querying Common Table Expressions, we looked at creating common table expressions for select statements to help us organize data. This can be useful in aggregates, partition-based selections from within data, or for calculations where ordering data within groups can help us. We also saw that we weren’t required to explicitly create a table and insert data, but we did have to ensure that we had names for each of the columns and that the names were unique. Now, we’ll use our select statements for inserts and updates.
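For example, a CTE’s select can feed an insert directly; a sketch with illustrative table names:

$insertFromCte = @"
WITH RecentFoods AS (
    SELECT FoodName, AddedDate
    FROM dbo.Foods
    WHERE AddedDate >= DATEADD(DAY, -30, GETDATE())
)
INSERT INTO dbo.FoodAudit (FoodName, AddedDate)
SELECT FoodName, AddedDate
FROM RecentFoods;
"@
Invoke-Sqlcmd -ServerInstance "OurServer" -Database "OurDatabase" -Query $insertFromCte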

Read more »

CTEs in SQL Server; Querying Common Table Expressions

January 25, 2019 by Timothy Smith

Common table expressions (CTEs) in SQL Server give us a tool for designing and organizing queries in ways that may allow faster development and troubleshooting and may improve performance. In the first part of this series, we’ll look at querying against these with a practice data set. From examples of wrapped query checks to organizing data to multiple structured queries, we’ll see how many options we have with this tool and where it may be useful when we query data.
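As a small sketch of the querying pattern (table and column names are illustrative):

$query = @"
WITH FoodGroups AS (
    SELECT FoodGroup, COUNT(*) AS FoodCount
    FROM dbo.Foods
    GROUP BY FoodGroup
)
SELECT FoodGroup, FoodCount
FROM FoodGroups
WHERE FoodCount > 5
ORDER BY FoodCount DESC;
"@
Invoke-Sqlcmd -ServerInstance "OurServer" -Database "OurDatabase" -Query $query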

Read more »

SQL Bulk Insert Concurrency and Performance Considerations

January 18, 2019 by Timothy Smith

One of the challenges we face when using SQL bulk insert from flat files can be concurrency and performance, especially if the load involves a multi-step data flow where we can’t execute a later step until we finish an earlier one. We also see these optimization challenges with constraints, as fewer steps to complete a data flow results in saved time, but possibly less accurate data.
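A sketch of the performance levers involved (the file path and table name are placeholders):

$bulkLoad = @"
BULK INSERT dbo.FoodImport
FROM 'C:\Import\foods.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK,           -- table lock for a faster, minimally logged load
    BATCHSIZE = 50000  -- commit in batches to limit log growth and blocking
);
"@
Invoke-Sqlcmd -ServerInstance "OurServer" -Database "OurDatabase" -Query $bulkLoad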

Read more »

Working With Line Numbers and Errors Using Bulk Insert

January 10, 2019 by Timothy Smith

In the first part of reviewing the basics of bulk insert, we looked at importing entire files, specifying delimiters for rows and columns, and bypassing error messages. Sometimes we’ll want to skip the first and ending lines, log errors and bad records for review after inserting data, and work with data types directly without first importing using a varchar and converting to the data type later. In this part, we look at these techniques using T-SQL’s native bulk insert.
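A sketch combining those options (the file paths and table name are placeholders):

$loadWithLog = @"
BULK INSERT dbo.FoodImport
FROM 'C:\Import\foods.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2,      -- skip a header line
    MAXERRORS = 10,    -- tolerate up to ten bad records
    ERRORFILE = 'C:\Import\foods_errors.log'  -- write bad records here for review
);
"@
Invoke-Sqlcmd -ServerInstance "OurServer" -Database "OurDatabase" -Query $loadWithLog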

Tracking Times In Data Flows for Finding Performance Issues

January 9, 2019 by Timothy Smith

We’re facing a challenge with several of our data flows that use more time than they have in the past and we’re not sure when this trend started. We know in the past month, our reports have been delayed by over a day from the start to the finish. For some of our data flows we use SQL Server Agent that calls SSIS packages or procedures, while some of them use a custom data import and reporting application we’ve created. How can we track the length of time for these data flows, since we’re using a combination of tools for importing data?
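One approach, whatever tool runs the step, is to write a start and end timestamp around it; a sketch with placeholder flow, step, and table names:

$logStep = @"
INSERT INTO dbo.DataFlowLog (FlowName, StepName, StartTime)
VALUES ('NightlyImport', 'LoadFiles', GETDATE());
"@
Invoke-Sqlcmd -ServerInstance "OurServer" -Database "OurDatabase" -Query $logStep

# ... the data flow step itself runs here ...

$closeStep = @"
UPDATE dbo.DataFlowLog
SET EndTime = GETDATE()
WHERE FlowName = 'NightlyImport' AND StepName = 'LoadFiles'
  AND EndTime IS NULL;
"@
Invoke-Sqlcmd -ServerInstance "OurServer" -Database "OurDatabase" -Query $closeStep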

T-SQL’s Native Bulk Insert Basics

January 8, 2019 by Timothy Smith

From troubleshooting many data flow applications designed by others, I’ve seen a common pattern of over-complexity in many designs. Putting aside the risks introduced by too much complexity, troubleshooting these designs often involves opening many different applications – from a notepad file, to SSIS, to SQL Server Management Studio, to a script tool, etc. It may sound like these flows are doing a hundred steps, yet many times they’re simply importing data from a file, or calling five stored procedures and then moving a file with a file task. This complexity is often unnecessary, as is opening many different tools when we can use a few tools and solve issues faster.

Read more »