As we have also seen in the previous blog posts, our Neural Network consists of a tf.Graph(), which contains all of the computational steps required for the Neural Network, and a tf.Session, which is used to execute these steps.
The computational steps defined in the tf.Graph can be divided into four main parts:
- We initialize placeholders, which are filled with batches of training data during the run.
- We define the RNN model and use it to calculate the output values (logits).
- The logits are used to calculate a loss value.
- The loss value is used by an Optimizer to optimize the weights of the RNN.
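The four parts can be sketched without TensorFlow at all; here is a minimal NumPy stand-in showing the same data flow (the shapes, weight names, and learning rate are my own illustrative assumptions, not code from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, steps, features, hidden, classes = 4, 5, 3, 8, 2

# 1. "Placeholders": a batch of training data fed in at run time.
batch_x = rng.normal(size=(batch, steps, features))
batch_y = np.eye(classes)[rng.integers(0, classes, size=batch)]

# 2. The RNN model: a vanilla cell unrolled over time, producing logits.
Wx = rng.normal(size=(features, hidden)) * 0.1
Wh = rng.normal(size=(hidden, hidden)) * 0.1
Wo = rng.normal(size=(hidden, classes)) * 0.1
h = np.zeros((batch, hidden))
for t in range(steps):
    h = np.tanh(batch_x[:, t, :] @ Wx + h @ Wh)
logits = h @ Wo

# 3. The logits are turned into a loss value (softmax cross-entropy).
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
loss = -np.mean(np.sum(batch_y * np.log(probs), axis=1))

# 4. An optimizer step: plain gradient descent, updating only the
#    output weights to keep the sketch short (a real optimizer
#    updates every weight in the graph).
grad_logits = (probs - batch_y) / batch
Wo -= 0.1 * (h.T @ grad_logits)
```

In TensorFlow proper, parts 2–4 only *define* nodes in the tf.Graph; nothing computes until the tf.Session runs them with a feed of placeholder data.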
As a lazy casual, I’ll probably stick with letting Keras do most of the heavy lifting.
An extractor is an object that has an unapply method, which takes an object as input and gives back its arguments; custom extractors are created by defining this unapply method. The unapply method is called an extractor because it takes an element of a set and extracts some of its parts, while the apply method (also called an injection) acts as a constructor: it takes some arguments and yields an element of a given set.
Click through for explanatory examples.
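Scala's apply/unapply pair has a rough Python analogue; this sketch uses an Email type of my own invention (the name and the splitting rule are assumptions for illustration, not taken from the linked post):

```python
class Email:
    """A toy analogue of a Scala companion object with apply/unapply."""

    @staticmethod
    def apply(user, domain):
        # Injection: acts as a constructor, takes arguments and
        # yields an element of the set (here, an address string).
        return f"{user}@{domain}"

    @staticmethod
    def unapply(address):
        # Extraction: takes an element of the set and gives back the
        # arguments it was built from, or None if it doesn't match.
        user, _, domain = address.partition("@")
        return (user, domain) if domain else None


addr = Email.apply("alice", "example.com")
parts = Email.unapply(addr)               # ("alice", "example.com")
no_match = Email.unapply("not-an-email")  # None
```

In Scala the compiler calls unapply for you during pattern matching; in this Python sketch you call it by hand, but the constructor/deconstructor symmetry is the same.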
First, let’s define a few terms, so we can see how to detect whether we’re making good use of our indexes, as they relate to the queries running in our SQL Server.
- Whenever you submit a query to SQL Server, if it includes a JOIN and/or WHERE clause, that constitutes a row filtering pattern known as a predicate.
- The query optimizer can use that predicate to estimate how best to retrieve only the intended rows. The number of rows expected after the predicate has been applied surfaces in the query plan as the Estimated Number of Rows.
- When that estimated plan is executed, and you look at the actual execution plan, this surfaces as the Actual Number of Rows. Usually, a big difference between Estimated and Actual number of rows indicates a misestimation that may need to be addressed to improve performance: maybe you don’t have the right indexes in place?
These are the two row-related properties you would find on every SQL Server plan up through SQL Server 2014.
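To make the Estimated versus Actual distinction concrete, here is a toy model in Python. This is not SQL Server's real algorithm (real statistics use histograms); the uniform-distribution assumption and the `value <= 30` predicate are mine:

```python
import random

random.seed(42)
rows = [random.randint(1, 100) for _ in range(1000)]  # a column of values

# A crude optimizer "statistic": assume values are spread uniformly
# across the observed range, then estimate how many rows a predicate
# like "value <= 30" will return.
lo, hi = min(rows), max(rows)
estimated = len(rows) * (30 - lo + 1) / (hi - lo + 1)

# The "actual" row count is only known by executing the filter.
actual = sum(1 for v in rows if v <= 30)
```

When the data really is uniform, the estimate lands close to the actual count; skew in the data (or stale statistics) is what drives the two numbers apart, and that gap is the misestimation the plan properties reveal.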
Read on to learn how predicate pushdown can make queries faster.
The premise is simple: it will generate a series of DROP and then CREATE INDEX commands for every index. The process is a little more complex in practice, but at a high level it:
- Creates a special schema to house a temporary object,
- Creates a special stored procedure to run the code,
- Calls said stored procedure,
- Generates a bunch of PRINT statements that serve as the output (along with new line support for readability),
- Cleans up the stored procedure it generated,
- And finally deletes the schema it created.
Click through for the script, as well as a bonus PowerShell script. Because hey, it’s only six lines of code.
At least they used to be, before I built the command that started it all: Start-DbaMigration. Start-DbaMigration is an instance to instance migration command that migrates just about everything. It’s really a wrapper that simplifies nearly 30 other copy commands, including Copy-DbaDatabase, Copy-DbaLogin, and Copy-DbaSqlServerAgent.
Also a bonus shout out to dbachecks.
For the data file, the impact can be illustrated in the following chain of events:
1. A new 1MB data file is created that contains no information (i.e. a 1MB data file containing 0MB of data).
2. Data is written to the data file until it reaches the file size (i.e. the 1MB data file now contains 1MB of data).
3. SQL Server suspends normal operations to the database while the data file is grown by 1MB (i.e. the data file is now 2MB and contains 1MB of data). If Instant File Initialization (IFI) is enabled, the file is expanded and database operations resume. If IFI is not enabled, the expanded part of the data file must be zeroed before database operations resume, resulting in an additional delay.
4. Once the data file has been grown successfully, the server resumes normal database processing. At this point the server loops back to Step 2.
The server will continue this run-pause-run-pause processing until the data file reaches its MAXSIZE, or the disk becomes full. If the disk that the data file resides on has other files on it (i.e. the C drive, or a disk that is shared by several databases), there will be other disk write events happening between the data file growth events. This may cause the data file expansion segments to be non-contiguous, increasing file fragmentation and further decreasing database performance.
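The run-pause-run loop above can be sketched as a toy simulation; the fixed 1MB growth increment mirrors the example, while the function name and structure are my own:

```python
def simulate_autogrowth(total_write_mb, growth_mb=1):
    """Count how many growth pauses a write workload triggers."""
    file_size = growth_mb   # step 1: a new file with one empty increment
    written = 0
    growth_events = 0
    while written < total_write_mb:
        # step 2: write until the file is full
        written = min(written + growth_mb, total_write_mb)
        if written == file_size and written < total_write_mb:
            # step 3: normal operations pause while the file grows
            file_size += growth_mb
            growth_events += 1
        # step 4: processing resumes and loops back to step 2
    return file_size, growth_events

# Writing 64MB in 1MB increments pauses the database 63 times;
# a larger growth increment cuts the number of pauses sharply.
assert simulate_autogrowth(64, growth_mb=1) == (64, 63)
assert simulate_autogrowth(64, growth_mb=8) == (64, 7)
```

The model leaves out the zeroing delay that IFI avoids, but it shows why a tiny autogrowth setting multiplies the number of suspensions.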
This is all to answer the question, “What’s the problem with missing a few log backups?”
Useful information it provides at table level:
- tableType, to identify HEAP tables
- row_count, to identify tables with plenty of rows or no rows at all
- TotalSpaceMB, to identify big tables in size
- LastUserAccess, to identify tables that are not used
- TotalUserAccess, to identify tables that are heavily used
- TableTriggers, to identify tables that have triggers
Useful information it provides at column level:
- DataType-Size, to identify supersized, incorrect, or deprecated data types
- Identity, to identify identity columns
- Mandatory-DefaultValue, to identify NULL/NOT NULL columns or columns with default constraints
- PrimaryKey, to identify primary key columns
- Collation, to identify columns whose collation might differ from the database's
- ForeignKey-ReferencedColumn, to identify foreign keys and the table.column they reference
Click through for the script.
This had been an issue for some time, until now. I found the following link that helped me install SQL Server on the latest Ubuntu 18.04:
But there are a few missing steps which, once filled in, help ease the burden of errors. At the same time, the information is a little out of date.
Still, it works with the following adjustments.
Please Understand!! This is NOT approved by Microsoft. Use this method for Test Only!!
I’m waiting somewhat impatiently for Microsoft and Hortonworks to support Ubuntu 18.04.