
MSSQL: specifying targetNamespace with FOR XML, XMLSCHEMA

January 23, 2010

You have probably found that although you can specify the namespace used when serializing to XML with FOR XML, there is no option for specifying the targetNamespace of a schema generated with the XMLSCHEMA directive. Instead you get an auto-incrementing URI scoped to the user's session, e.g. urn:schemas-microsoft-com:sql:SqlRowSet7.
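For example, running the schema-generating part of the example below on its own returns an xsd:schema element stamped with one of those session-scoped URIs:

-- Default behavior: the inline schema's targetNamespace is an auto-numbered,
-- session-scoped URI such as urn:schemas-microsoft-com:sql:SqlRowSet7
select *
from (
 select top 0 1 as intId
) as t for xml auto, xmlschema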

You can, however, assign the generated schema to an xml variable and then use XML DML to replace the value of the targetNamespace attribute. Here is a simple example, enjoy:

declare @x xml, @ns varchar(1024)
set @ns = 'newNamespace'
set @x = (
 select *
 from (
  select top 0 1 as intId
 ) as t for xml auto, xmlschema
)
set @x.modify('declare namespace xsd = "http://www.w3.org/2001/XMLSchema";
replace value of /xsd:schema[1]/@targetNamespace with sql:variable("@ns")')
select @x
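To confirm the replacement took effect, you can read the attribute back; a minimal check, appended to the same batch so @x is still in scope:

-- Optional check: returns 'newNamespace' rather than a SqlRowSetN URI
select @x.value('declare namespace xsd = "http://www.w3.org/2001/XMLSchema";
(/xsd:schema/@targetNamespace)[1]', 'varchar(1024)') as targetNamespace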

Using SQL Server indexed views for time averaged aggregations

June 16, 2009

I have found that many website games, such as quiz systems, need to report "most popular in the last 24 hours" or similar time-based aggregations. For these workloads, where reads vastly outnumber writes, it is usually a good trade-off to create an indexed view that maintains the aggregations over time. Sometimes the time slice can be midnight-based 24-hour periods; in other circumstances you need greater granularity.

Here is a simple example of maintaining an aggregated view that is addressable by a given day and minute.

create table timeSeries (
timeSeriesId int identity primary key clustered
,updateDate datetime not null
,payload float not null
)

insert timeSeries (updateDate, payload) values ('2009-06-16 12:00:00', rand())
insert timeSeries (updateDate, payload) values ('2009-06-16 12:00:59', rand())
insert timeSeries (updateDate, payload) values ('2009-06-16 12:01:00', rand())
insert timeSeries (updateDate, payload) values ('2009-06-16 12:59:00', rand())
insert timeSeries (updateDate, payload) values ('2009-06-16 01:00:00', rand())
insert timeSeries (updateDate, payload) values ('2009-06-16 01:30:00', rand())
insert timeSeries (updateDate, payload) values ('2009-06-16 23:59:00', rand())
insert timeSeries (updateDate, payload) values ('2009-06-17 00:01:00', rand())
insert timeSeries (updateDate, payload) values ('2009-06-17 00:01:30', rand())

create view timeSeriesByMinute_IV with schemabinding as
select
dayBucket = datediff(day, 0, updateDate)
,minuteBucket = datediff(minute, 0, (updateDate - datediff(day, 0, updateDate)))
,payloadSum = sum(payLoad)
,numRows = count_big(*)
from dbo.timeSeries
group by
datediff(day, 0, updateDate)
,datediff(minute, 0, (updateDate - datediff(day, 0, updateDate)))
go

create unique clustered index CU_timeSeriesByMinute_IV on timeSeriesByMinute_IV (dayBucket, minuteBucket)
go

create view timeSeriesByMinute as
select
dayBucket
,minuteBucket
,payloadSum
,numRows
,payloadAvg = payloadSum / numRows
from dbo.timeSeriesByMinute_IV with (noexpand)
go

declare @timeLookup datetime, @dayBucket int, @minuteBucket int
select
@timeLookup = '2009-06-16 12:00:00'
,@dayBucket = datediff(day, 0, @timeLookup)
,@minuteBucket = datediff(minute, 0, (@timeLookup - datediff(day, 0, @timeLookup)))

select * from timeSeriesByMinute where dayBucket = @dayBucket and minuteBucket = @minuteBucket

If you have the Enterprise edition of SQL Server, the NOEXPAND hint is not required to make an indexed view use its index. Unfortunately the Standard edition will not use the index by default, so the NOEXPAND hint is required. To make sure other developers never miss this required hint, I always implement indexed views with the above base-view / wrapper-view approach: the wrapper view is the only one anyone references, which guarantees the NOEXPAND hint is always applied.
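Coming back to the time-window reporting this was all for, the per-minute buckets roll up easily into a rolling window. Below is a sketch of a "last 24 hours" average built on the wrapper view above; getdate() stands in for whatever clock the application uses (swap in getutcdate() if you store UTC):

-- A sketch: average payload over the trailing 24 hours, from the per-minute buckets
declare @cutoff datetime, @cutoffDay int, @cutoffMinute int
set @cutoff = dateadd(hour, -24, getdate())
select
 @cutoffDay = datediff(day, 0, @cutoff)
,@cutoffMinute = datediff(minute, 0, (@cutoff - datediff(day, 0, @cutoff)))

-- Everything after the cutoff: later days, or the cutoff day from the cutoff minute onward
select
 payloadAvg24h = sum(payloadSum) / nullif(sum(numRows), 0)
,sampleCount24h = sum(numRows)
from timeSeriesByMinute
where dayBucket > @cutoffDay
 or (dayBucket = @cutoffDay and minuteBucket >= @cutoffMinute)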

Parsing date / time in the Windows shell for a log file timestamp

October 6, 2008

A quick post to share a simple solution I came up with for a problem I encountered this morning. A script I inherited needed to be tweaked so that, when an error condition occurs, the log file is copied to an auditing location. The filename of the log file is static, so the copy in the auditing location would be overwritten each time the error condition occurs. Below is a simple way to reference a filename with a timestamp for these kinds of situations.

It only has resolution to the minute, which is adequate for my needs. To get resolution down to seconds or milliseconds, you could pipe a CRLF to the 'time' command and redirect its output to a text file, then parse that file instead of the time /t output directly.
FOR /f "usebackq tokens=2-4 delims=/ " %A in (`date /t`) do (
SET _NOW_MONTH=%A
SET _NOW_DAYOFMONTH=%B
SET _NOW_YEAR=%C)

REM Note: converting hours to 24-hour (military) time. SET /A resolves _NOW_HOUR
REM at execution time, so it works correctly inside the parenthesised block.
FOR /f "usebackq tokens=1-3 delims=: " %A in (`time /t`) do (
SET _NOW_HOUR=%A
SET _NOW_MINUTE=%B
if "%C" equ "PM" SET /A _NOW_HOUR=_NOW_HOUR + 12)

SET _NOW_DATETIME=%_NOW_YEAR%%_NOW_MONTH%%_NOW_DAYOFMONTH%%_NOW_HOUR%%_NOW_MINUTE%

REM Reference the timestamp as follows:
copy MyLogFile.txt \\server\share\MyLogFile_%_NOW_DATETIME%.TXT

EDIT: OK, I have already come up with a better solution 🙂

for /F "tokens=1,2,3,4,5,6,7* delims=.:^/^ " %i in ("%date%.%time%") do (
SET _Timestamp=%l%j%k%m%n%o
REM pad hour if single digit
if "%m" neq "" if %m LSS 10 SET _Timestamp=%l%j%k0%m%n%o)

Using SYNONYMS in SQL Server for cross database dependencies

September 10, 2008

Whenever possible, I prefer to keep the dependencies of production code minimized and well organized.
I am very fond of using synonyms in SQL Server 2005 to manage all cross-database references; this makes it easy both to enumerate and to change every pointer to a given database.
Here is an example of creating two databases and defining some synonyms from one to the other.

create database orders
create database products
go

use products
go
create schema Catalog
create table Catalog.Offerings
(
ProductId int identity primary key
,ProductTitle nvarchar(128) not null
)
create table Catalog.NewOfferings
(
ProductId int references Catalog.Offerings(ProductId)
,AvailableDate datetime not null
)
create table Catalog.BestSellers
(
ProductId int references Catalog.Offerings(ProductId)
,PopularityRank smallint not null constraint bestSellers_uq_popularityRank unique
)
go

insert Catalog.offerings (productTitle) values ('product 1')
insert Catalog.offerings (productTitle) values ('product 2')
insert Catalog.offerings (productTitle) values ('new product 3')
insert Catalog.newofferings (productId, availableDate) values (3, dateadd(day, 1, getutcdate()))
insert Catalog.bestsellers (productId, popularityRank) values (2,1)
insert Catalog.bestsellers (productId, popularityRank) values (1,2)
go

use orders
go
create schema ProductsCatalog
go
create synonym ProductsCatalog.NewOfferings for Products.Catalog.NewOfferings
create synonym ProductsCatalog.Offerings for Products.Catalog.Offerings
go
select ofr.productId, ofr.productTitle, newofr.availableDate
from ProductsCatalog.NewOfferings as newofr
join ProductsCatalog.Offerings as ofr
on newofr.productId = ofr.productId
go

Now suppose a business requirement appears that means all existing references to Products must be updated to use a new Products09 database. We can script out all synonyms in the Orders database with SQL Server Management Studio via:
Tasks => Generate Scripts => choose the 'Orders' db => choose the Synonyms object type => select all
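Alternatively, the same information can be pulled straight from the catalog; here is a quick sketch, run in the Orders database, that lists every synonym and the object it points at:

-- Enumerate all synonym definitions and the base objects they refer to
use orders
go
select
 schema_name(schema_id) as schemaName
,name as synonymName
,base_object_name
from sys.synonyms
order by schemaName, synonymName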

Then we can update these synonym definitions to point to the new Products09 database in one transaction:
BEGIN TRAN
DROP SYNONYM [ProductsCatalog].[NewOfferings];
DROP SYNONYM [ProductsCatalog].[Offerings];

CREATE SYNONYM [ProductsCatalog].[NewOfferings] FOR [Products09].[Catalog].[NewOfferings];
CREATE SYNONYM [ProductsCatalog].[Offerings] FOR [Products09].[Catalog].[Offerings];
COMMIT

You can simulate moving to the new Products09 database by renaming the existing Products DB to Products09. Running the above select query will show that the synonyms are happily referring to the new database and returning the same results as before.
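If you want to try this out locally, a rename is enough to simulate the move; a sketch (it assumes nothing else is connected to the Products database):

-- Simulate the migration by renaming the existing database
use master
go
alter database Products modify name = Products09
go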