One would expect that after changing the version, the previous behavior would remain the same. Well, it hasn't. What has changed?
After each process method call, the punctuate method is called. Once punctuateInterval is scheduled, punctuate also occurs. This means the following:
- In the first test scenario, each “Arrived: message_<offset>” message in the console is accompanied by a “Punctuate call”. Unsurprisingly, we have one “Processed: 1” message in the output topic. After ten messages, we have another “Punctuate call” and “Processed: 0” pair.
- In the second scenario, we have nine “Arrived: message_<offset>” and “Punctuate call” pairs on the console, followed by nine “Processed: 1” messages in the output topic. After the pause, the tenth message gives us one “Arrived: message_<offset>” and three “Punctuate call” messages. In the output topic, we see “Processed: 1”, “Processed: 0”, and “Processed: 0”.
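The catch-up burst in the second scenario can be sketched with a minimal Python simulation of stream-time punctuation. This models the semantics described above, not the Kafka Streams API itself, and the one-second message spacing and one-second punctuation interval are assumptions for illustration:

```python
def punctuate_calls(timestamps, interval):
    """For each record timestamp, count how many punctuate calls fire
    once stream time has advanced past one or more scheduled deadlines."""
    next_deadline = interval  # first punctuation is due after one interval
    calls = []
    for ts in timestamps:
        fired = 0
        # If stream time jumps past several deadlines at once (e.g. after
        # a pause in the input), punctuate fires once per missed deadline.
        while ts >= next_deadline:
            fired += 1
            next_deadline += interval
        calls.append(fired)
    return calls

# Nine messages one second apart, then a pause, then a tenth at t=12:
# each of the first nine records is paired with one punctuate call,
# and the tenth record triggers three punctuate calls at once.
print(punctuate_calls([1, 2, 3, 4, 5, 6, 7, 8, 9, 12], interval=1))
# → [1, 1, 1, 1, 1, 1, 1, 1, 1, 3]
```

The key point is that punctuation is driven by stream time rather than by a wall-clock timer, which is why a gap in the input produces a burst of punctuate calls when the next record finally arrives.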
Read the whole thing. This sort of behavioral change can be hard to suss out when testing a streaming application.
I have my model as an input and want to spit it out at the end as well. But when I try that, I get an error:
Msg 39017, Level 16, State 3, Line 239
Input data query returns column #1 of type 'varbinary(max)' which is not supported by the runtime for 'R' script. Unsupported types are binary, varbinary, timestamp, datetime2, datetimeoffset, time, text, ntext, image, hierarchyid, xml, sql_variant and user-defined type.
So there goes that plan—I can output a VARBINARY(MAX) model, but I cannot input one.
Click through to see my workaround.
After SSMS version 17.4 was released back in December, the SQLPS module is no longer available. So, if you try to use “Start PowerShell” from any of the database objects, you’ll get a “No SQL Server cmdlets found…” popup message.
And good riddance. Even in 2008, the SQLPS method of dealing with PowerShell was obsolete, as PowerShell extensions were supposed to be snap-ins rather than independent shells. The SQL Server PowerShell module is a major improvement in that regard.
Imagine I ask you: would you prefer an Apple iPhone over a Samsung Galaxy? Or if I were to ask you: would you prefer BMW over Audi? In all these cases, both phones or both cars will get the job done. So will Python or R, R or Python. So instead of asking which one I prefer, ask yourself which one suits your environment better. If your background is more statistics and less programming, take R; if you are more into programming and less into statistics, take Python. In both cases, you will accomplish results faster with your preferred language. If you ask me whether you can do gradient boosting or ANOVA or MDS in Python or in R, the answer is yes: you can do both in either language.
This graf hits the crux of my opinion, but as I’ve gone deeper into the topic over the past year, I think the correct answer is probably “both” for a mature organization and “pick the one which suits you better” for beginners.
Notice that various options have a colored non-equals icon. Here you can quickly see the various values that are different between the two execution plans.
At the bottom of the execution plans is a Showplan Analysis window. This window has color-coded keys for various sections of the plan:
He also shows how to import and export your SSMS configuration settings. This makes it easier to migrate to a different machine or keep your desktop and laptop looking the same.
When an application connects to a database, it takes resources to establish that connection. So rather than doing this over and over again, a connection pool is established to handle this functionality and cache connections. Several issues can arise if the pool is not created with the same connection string (fragmentation), or if connections are simply not closed or disposed of properly.
In the case of fragmentation, each connection string associated with a connection is considered part of one connection pool. If you create two connection strings with different database names, max pool values, timeouts, or security settings, then you will in effect create different connection pools. This is much like how query plans get stored in the plan cache: different whitespace or capitalization creates different plans.
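The fragmentation point can be illustrated with a small Python toy model (this is not ADO.NET, just a sketch of the keying behavior): pools are looked up by the exact connection string, so strings that differ only in an option value, spacing, or casing land in separate pools.

```python
pools = {}

def get_pool(conn_str):
    # The raw string is the lookup key; no normalization happens,
    # so any textual difference yields a brand-new pool.
    return pools.setdefault(conn_str, [])

get_pool("Server=db1;Database=Sales;Max Pool Size=100")
get_pool("Server=db1;Database=Sales;Max Pool Size=200")   # different option value
get_pool("Server=db1; Database=Sales;Max Pool Size=100")  # extra whitespace
get_pool("server=db1;database=Sales;Max Pool Size=100")   # different casing

print(len(pools))  # → 4: four strings, four separate pools
```

Four connections that all point at the same database end up fragmented across four pools, which is exactly the waste the excerpt describes.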
You can get the .NET pool counts from:
Performance Monitor > .NET Data Provider for SqlServer > NumberOfActiveConnectionPools
Click through for more information.
Wednesday, I walk into the office and immediately hear that CHECKDB is the source of issues on one of the servers and is the reason behind some errors that have been happening. While I don’t think this is the case (it might look like it on the surface, but something else is happening that is the actual cause), I also wanted to find out what CHECKDB was running at the time the errors occurred.
I needed information on when CHECKDB ran for each database. When you look for what you can run to find when CHECKDB was last executed, you find this blog post and also this blog post on grabbing this info. While these were very informative, they were for one database at a time. I needed this for all the databases, so I could try not only to find out when each one ran, but also to use these timestamps to figure out the duration.
The big recommendation I’d make with regard to this is not to use sp_msforeachdb. Otherwise, click through for a good script.
Recently, I had a requirement to collate and briefly compare some of the various methods to perform SQL Server backups for databases deployed onto Azure IaaS machines. The purpose was to provide a few options to cater for the different types (OLTP, DW, etc.) and sizes (small to big) of databases that could be deployed there.
Up front, I am NOT saying that these are the ONLY options to perform standard SQL backups! I am sure there are others – however, the ones below are both supported and well documented, which, when it comes to something as critical as backups, is pretty important.
So the purpose of this blog is to provide a quick and brief list of various SQL backup methods!
Read on for the options.