Ahh, now we get down to it. When you say date/time is unique in all tables, my knee-jerk reaction is "Really? How are you assuring that?" But now I am led to believe that the source of the data supplies more date/time precision than SQL Server records natively.
Key question: To what precision are you maintaining your date/time information?
Key question: How are you assuring that all date/time fields are unique? This is not a common guarantee in SQL Server.
|||The data comes from several "clean" sources. The datetime field is set as the primary key. During import, the data is checked for duplicate datetime values. After import, the data is cleaned again.
Most of the data is in one-minute intervals.
When you say uniqueness is not a common guarantee, are you saying that setting a field as a primary key or unique constraint does not work properly in SQL Server?
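The constraint itself works; the risk is upstream. Because native datetime stores time in 1/300-second ticks, two source timestamps that differ by less than a tick round to the same stored value, and the second insert would then fail the primary key check. A small sketch (Python rather than T-SQL, purely for illustration; the rounding here approximates, not reproduces, the engine's exact rule):

```python
from datetime import datetime

def to_ticks(dt: datetime) -> int:
    """Round a time-of-day to SQL Server datetime's 1/300-second ticks
    (an approximation of the engine's internal rounding)."""
    midnight = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    return round((dt - midnight).total_seconds() * 300)

# Two distinct source timestamps, 0.5 ms apart:
a = datetime(2024, 1, 1, 12, 0, 0, 1000)  # 12:00:00.0010
b = datetime(2024, 1, 1, 12, 0, 0, 1500)  # 12:00:00.0015

print(to_ticks(a) == to_ticks(b))  # True: both collapse to the same stored value
```

So if the import ever samples finer than the datatype's resolution, the duplicate check during import is what saves you, not the datatype.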
|||If your granularity is finer than the millisecond, you will need an alternative to the native datetime datatype; it only tracks time to 1/300th of a second.
|||Most of the data is at one-minute or coarser (10-minute, hourly, etc.) levels. The possibility exists of sampling data at intervals as fine as one second. Nothing will be done at the millisecond level.
|||I think I would stick with native datetime; can we get additional opinions, please?
|||If I recall correctly, SQL Server internally stores datetime values as two four-byte integers: the left four bytes hold the number of days since day 0 (Jan 01, 1900), and the right four bytes hold the number of milliseconds since midnight.
So while it is not stored as a single integer, it is stored in the next best thing.
|||Yes; however, the time portion is NOT stored to the nearest millisecond but to the nearest 1/300th of a second. It is a legacy from Sybase.
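The two-integer layout described above is easy to mimic. A minimal sketch (Python for illustration only; the tick rounding is an approximation of the engine's behavior):

```python
from datetime import datetime

EPOCH = datetime(1900, 1, 1)  # SQL Server datetime day 0

def encode(dt: datetime) -> tuple[int, int]:
    """Approximate SQL Server's internal datetime layout:
    (days since 1900-01-01, 1/300-second ticks since midnight)."""
    days = (dt.date() - EPOCH.date()).days
    midnight = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    ticks = round((dt - midnight).total_seconds() * 300)
    return days, ticks

print(encode(datetime(1900, 1, 2)))       # (1, 0)
print(encode(datetime(1900, 1, 1, 12)))   # (0, 12960000) -- 43200 s * 300 ticks/s
```

With 300 ticks per second, the finest representable step is about 3.33 ms, which is why displayed millisecond values always land on the familiar .000/.003/.007 pattern.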