Press "Enter" to skip to content


Oracle Errors: Snapshot Too Old and LOB Columns

David Fitzjarrell tackles a pair of errors:

One of the few errors that strikes fear in the heart of a DBA is the dreaded:

ORA-01555 snapshot too old
and
ORA-22924 snapshot too old

Of course there are plenty of blogs instructing the DBA to simply “increase the undo_retention”, and there are cases where this works as expected. However, LOBs can be different, as two different mechanisms exist for undo management. A LOB column can be configured to use retention to manage before images of the data, but that can be confusing, as each LOB column MAY have its own retention setting. The DBA_LOBS view reports whether a LOB column uses retention or pctversion to manage undo, along with the associated setting in use. Let’s dig into this a bit deeper.

Read on to learn more about how LOB retention works, the types of issues you can run into with it, and how to correct those issues.
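
As a starting point (this isn’t from David’s post, and the schema name is just a placeholder), a query along these lines against DBA_LOBS shows which mechanism each LOB column is using:

-- Run with DBA privileges; whichever of PCTVERSION or RETENTION is in
-- effect for a LOB column governs how its before images are kept.
SELECT owner,
       table_name,
       column_name,
       securefile,
       pctversion,
       retention
FROM   dba_lobs
WHERE  owner = 'APP_OWNER'   -- placeholder schema name
ORDER  BY table_name, column_name;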


An Overview of Postgres Data Types

Arindam Mondal categorizes various Postgres data types:

This article will show PostgreSQL Data Types with various examples.

Data types are an important part of a database: they represent the kind of values a column can hold. Choosing the right data type is one of the most important tasks in designing a table because it determines the kind of data we want to store. When creating a table, you must specify a data type for each column; a column can then store only that specific type of data, like integer, string, Boolean, floating point, and so on. In this article, we are going to discuss PostgreSQL data types.

The list is quite similar to what’s available in SQL Server, though there are a few differences, such as built-in support for storing network addresses.
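
As a quick illustration (a made-up table, not one from the article), a single CREATE TABLE can mix several of these built-in types, including inet for network addresses:

-- Hypothetical table showing a handful of PostgreSQL's built-in types.
CREATE TABLE device_log (
    id          bigserial   PRIMARY KEY,        -- auto-incrementing integer
    device_name text        NOT NULL,           -- variable-length string
    ip_address  inet,                           -- built-in network address type
    is_active   boolean     DEFAULT true,
    temperature numeric(5,2),                   -- exact decimal
    recorded_at timestamptz DEFAULT now()       -- timestamp with time zone
);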


Lists and DataFrames in R

Adrian Tam continues a series on core data types in R:

Vectors in R are supposed to be of homogeneous data type. You can use a list as the container if there are mixed data types, such as numbers and strings. The list and data frame are closely related in R. The data frame is probably more useful because it reflects how we usually collect statistics. In this post, you will learn about them. Specifically, you will know:

  • What are lists and data frames in R
  • How to manipulate lists and data frames

Read on to learn more about these two sorts of collections.


Conversion from XML to Target Collation Impossible

Chad Callihan has a mission, should you choose to accept it:

I recently ran into an error for a query that was casting XML to VARCHAR to search for a value. I have a bit of familiarity with the XML for these types of queries, so I was surprised when running the SELECT statement caused this error, which I had never seen before:

Msg 6355 “Conversion of one or more characters from XML to target collation impossible”

Click through for the solution. The real solution? NVARCHAR Everywhere.
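
Without spoiling Chad’s specific fix, the general shape of the problem looks something like this sketch (the XML here is invented): casting the XML to VARCHAR forces the characters into the target collation’s code page and can raise Msg 6355, while NVARCHAR keeps everything in Unicode.

DECLARE @doc xml = N'<order><note>Café ✓</note></order>';

-- Risky: the check mark has no representation in a typical Latin1 code page.
-- SELECT CAST(@doc AS varchar(max));   -- may fail with Msg 6355

-- Safer: stay in Unicode end to end.
SELECT CAST(@doc AS nvarchar(max)) AS doc_text;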


Sparse Columns in SQL Server

Chad Callihan occasionally inserts something:

Have you ever maxed out the SQL Server table column limit yet still needed more columns? Hopefully not considering SQL Server has a max limit of 1024 columns per table. But as I found out, it’s possible for someone to reach out and ask for even more. Sparse columns are an option to consider when you can’t get enough. Let’s take a look at what sparse columns are and how they can be used.

Sparse columns have very little utility, except in the most “I don’t think you’re doing it right” scenarios. Still, if you happen to end up in that scenario, there is a way out. Even so, I’d really want to understand the nature of the data in that problem and, knowing just the amount of detail in the scenario that I do, would lean toward storing the data either in an unpivoted fashion (one row per entity-attribute pair in an EAV-style “additional attributes” table) or as a JSON string, letting the client sort it out.
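
For reference, the sparse part is just a keyword on the column definition. This table is invented purely to show the syntax, including the optional column set that surfaces all sparse columns as one XML value:

-- Hypothetical table, for syntax only. Sparse columns must be nullable;
-- they cost nothing to store when NULL, at the price of extra overhead
-- for non-NULL values.
CREATE TABLE dbo.EntityAttributes
(
    EntityId       int           NOT NULL PRIMARY KEY,
    RarelyUsedNote varchar(200)  SPARSE NULL,
    SeldomSeenQty  decimal(10,2) SPARSE NULL,
    AllAttributes  xml COLUMN_SET FOR ALL_SPARSE_COLUMNS  -- optional catch-all view of the sparse columns
);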


Adding Microseconds to ADF Timestamps

Rayis Imayev can’t wait for the next second:

The current addToTime function (https://learn.microsoft.com/en-us/azure/data-factory/control-flow-expression-language-functions#addToTime) in Azure Data Factory (ADF) only supports a specific set of time units ranging from Year to Seconds. Since I needed to increment a timestamp by microseconds, I had to find an alternative solution in ADF. Here are my findings on how to do this using an alternate approach.

Click through for Rayis’s solution to the problem.


Analyzing SQL Server Table Metadata

Barney Lawrence looks at details about a table:

For a while I’ve been building up a script that uses SQL Server’s INFORMATION_SCHEMA.COLUMNS view to create a query that applies aggregates to each column in a table and then formats the results to allow an at-a-glance summary of the shape of the data inside it.

I showed some of the techniques used to build this in a session regarding metadata and data warehousing, which I co-presented with Emma Dolling and Ruth Pearson at SQLBits. The one comment that I heard the most after this session was “I want that script.” I did promise to make it available more than a few times, so I’ve put together a GitHub repository to hold it and other resources we built for the session.

Barney anticipated the most likely change I’d make while reviewing the script: APPROX_COUNT_DISTINCT() if you’re on SQL Server 2022 and dealing with a large table.
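
I haven’t reproduced Barney’s script here, but a stripped-down sketch of the idea looks something like the following, generating one profiling query per column of a hypothetical dbo.SomeTable:

-- Simplified sketch, not the actual script: build a probe query per column
-- to eyeball row counts, distinct counts, and NULL counts.
SELECT N'SELECT ''' + c.COLUMN_NAME + N''' AS column_name, '
     + N'COUNT(*) AS row_count, '
     + N'COUNT(DISTINCT ' + QUOTENAME(c.COLUMN_NAME) + N') AS distinct_count, '   -- APPROX_COUNT_DISTINCT() is an option on SQL Server 2022
     + N'SUM(CASE WHEN ' + QUOTENAME(c.COLUMN_NAME) + N' IS NULL THEN 1 ELSE 0 END) AS null_count '
     + N'FROM ' + QUOTENAME(c.TABLE_SCHEMA) + N'.' + QUOTENAME(c.TABLE_NAME) + N';' AS probe_query
FROM INFORMATION_SCHEMA.COLUMNS AS c
WHERE c.TABLE_SCHEMA = N'dbo'
  AND c.TABLE_NAME   = N'SomeTable';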


A Critique of XML

Andy Leonard isn’t XML’s biggest fan:

If you are sending me (or some other hapless victim data engineer) lots of data that resides in a stable schema – one in which the number, order, data type, etc. of the columns never change – using XML, I have a question:

Why?

Why are you using XML to transmit this data?

Read the whole thing. My approximate thoughts (because it is fairly early when I’m writing this, so I might have missed something) are:

  1. XML is most useful with an XSD (XML Schema Definition), a document describing the shape and rules of the XML data. This is a big advantage over CSV, as it helps you retain information on data types, data lengths, and other details which get lost in the comma.
  2. Speaking of which, CSVs run a high risk of the separator character also appearing naturally in the data. The problem is that there is no single right way to indicate that “That comma is a separator, but this comma is just a comma.” Different parsers work differently, and one of my lengthy rants about PolyBase is that it lets you indicate that you have a quoted delimiter and then helpfully removes the quotes before barfing on the commas inside them (there’s a brief illustration of a parser handling this correctly after this list). There is actually an ANSI standard character for a separator (the unit separator, 0x1F) which is not supposed to occur in the wild…but how many people actually use it? Especially considering that most tools don’t interpret it correctly, so you lose some of the readability of CSVs in the process.
  3. That said, for stable schemas with a known separator (or at least a known mechanism for differentiating separators from naturally occurring characters), a separated-values format works well.
  4. And that said, Parquet works better, assuming you don’t have a lot of long string columns. If you’re dealing mostly with numeric data, Parquet will compress much better, will retain data types and lengths, and won’t be a repetitious blob of angle brackets. But a lot of tools still don’t support Parquet, which is a downside.
  5. Basically, this is why we can’t have nice things.
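
On the quoting point above, here is a small illustration of a parser doing the right thing once you tell it how: BULK INSERT with FORMAT = 'CSV' (SQL Server 2017 and later) applies RFC 4180-style quoting, so an embedded comma survives as data. The file path and table are hypothetical.

-- Hypothetical staging table and file path, for illustration only.
CREATE TABLE dbo.CsvStaging
(
    CustomerName varchar(200),
    City         varchar(100)
);

-- With FORMAT = 'CSV', a row like "Smith, John",Cleveland loads as two
-- fields; the comma inside the quotes is data, not a separator.
BULK INSERT dbo.CsvStaging
FROM 'C:\import\customers.csv'
WITH (
    FORMAT          = 'CSV',
    FIELDQUOTE      = '"',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2        -- skip the header row
);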