Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Probably a simple question: I have some data that looks like this:

code:

MatchID         Time            Mod     KillerName      Weapon  VictimName

1215673061	1215672750	DEV	Entrails	DEVRL	0megaMan[APE]
1215673061	1215672838	DES	0megaMan[APE]	DES	Entrails
1215673061	1215672909	DEV	Entrails	DEVRL	0megaMan[APE]
1215673663	1215673144	DEV	0megaMan[APE]	DEVRL	Entrails
1215673663	1215673148	DEV	0megaMan[APE]	DEVRL	Entrails
1215673663	1215673162	DEV	0megaMan[APE]	DEVRL	Entrails
1215673663	1215673426	SUICIDE	Entrails	DEVRL	Entrails
1215673663	1215673623	FALLING	Entrails	GRUNT	Entrails
1215674570	1215673725	TRIGGER_HURT	Entrails	GRUNT	Entrails
1215731611	1215730773	DEV	Entrails	DEVCG	OutCa$t[APE]
1215731611	1215730797	DEV	Entrails	DEVCG	OutCa$t[APE]
1215731611	1215730861	DEV	OutCa$t[APE]	DEVRL	Entrails
1215732518	1215731666	DES	Entrails	DES	OutCa$t[APE]
1215732518	1215731675	DES	Entrails	DES	OutCa$t[APE]
1215732518	1215731919	DEV	Entrails	DEVRL	OutCa$t[APE]
1215756302	1215756117	TRIGGER_HURT	Entrails	GRUNT	Entrails
1215756302	1215756164	CHAINGUN	Entrails	GRUNT	0megaMan[APE]
1215757207	1215756362	DEV	Entrails	DEVCG	0megaMan[APE]


I am trying to count up the number of matches a player has been in. What I think I should do is:

- for every match ID, check to see if a PlayerName appears as either KillerName or VictimName
- If so, add 1 to MatchCount(PlayerName) then move on.
- perform this check for every player

But scanning every kill ever recorded on the server once per player seems really inefficient.

How can I achieve the same result more elegantly?


Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Jethro posted:

code:
select count(distinct matchID) matches,
       KillerName
from table
group by KillerName

That'll work for counting the matches in which PlayerName has scored a kill. But how do I run the check against either column, so a player who only appears as VictimName gets counted too?
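
One set-based sketch, extending Jethro's query to cover both columns (table name KillLog assumed): unpivot killer and victim into a single name column with a UNION, which also removes duplicate (match, player) pairs, then count distinct matches per name.

code:
select PlayerName,
       count(distinct MatchID) as matches
from (select MatchID, KillerName as PlayerName from KillLog
      union
      select MatchID, VictimName from KillLog) as appearances
group by PlayerName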

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I have a SQL database with tables derived from CSV flat files, and I need a better way of doing what I am doing.

Here's how it currently works:

- Data from a partner is exported from a database and dropped as a flat file into a directory called mmddyyyy on a sftp server
- Each day, a script pulls any new data files down from the server to a file server with a matching directory structure

Note: These files are technically "daily" files but some days it gets skipped due to database errors on their side and other days the job is rerun to backfill the gaps. The end result is that I have to parse all of the daily directories all the time to see if any files have appeared "in the past."

- the tables on the local database are cleared
- a DOS batch file then parses all of the daily directories and runs a BULK INSERT against each of the flat files, importing them into the database



The problem with this is that the database is hammered every day as it processes the thousands of flat files and imports millions of records. The batch job takes about two hours to run these days and everything else dies while it happens.

I drop the table and do a bulk insert because if I don't drop the table, I get duplicate records.


Surely there is a better way of doing this, but you know, rookie.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Triple Tech posted:

Sounds... financial. Am I close?

I'm no DBA, but have you considered turning off indexing and logging for that particular table/database?

Bingo. I work for an online broker.

Indexing and logging are already turned off.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Sub Par posted:

What is the nature of the duplicate records? Are the files you're importing cumulative (i.e. 9/1/2009 has all 8/31/2009 data plus all the stuff that happened in-between), or are they supposed to build on each other (and if so, why duplication)?

Can you do anything with a primary key and insert only unique rows rather than dropping the table every day and doing a bulk insert of millions of rows?

The files I am importing are cumulative in some cases and not in others. I have full control over my local database, so I can do whatever I want to it to make it more efficient. Can you tell me more about how I'd use primary keys and how to insert only unique rows?
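
A minimal sketch of the usual pattern, with hypothetical names (StagingTrades, Trades, a natural key of AccountID plus TradeDate): BULK INSERT each file into an empty staging table, then copy across only the rows the target doesn't already have, instead of truncating and reloading everything.

code:
TRUNCATE TABLE StagingTrades;

BULK INSERT StagingTrades
FROM '\\fileserver\mmddyyyy\extract.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);  -- FIRSTROW = 2 skips a header row

INSERT INTO Trades (AccountID, TradeDate, Amount)
SELECT s.AccountID, s.TradeDate, s.Amount
FROM StagingTrades s
WHERE NOT EXISTS (SELECT 1
                  FROM Trades t
                  WHERE t.AccountID = s.AccountID
                    AND t.TradeDate = s.TradeDate);

This keeps the nightly run incremental: reprocessing an old directory re-inserts nothing that already landed.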

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Knightmare posted:

Does the flat file have unique identifiers or do you assign them a unique number when importing? If they come with a primary key and you use whatever they have it should be easier.

The flat file has a name with a date string in it. The contents of the files vary, but generally include a customer code or an account ID or a transaction ID, or some combination of all three. They do not come with a primary key since they are extracts from a report derived from multiple tables on the far end.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
How do I import the contents of a text file into a specific field of a table in MySQL?

I have a bunch of TXT files with descriptions that I need to import to the same fields of multiple records.

I can't figure out how to use mysqlimport this way, if mysqlimport is even the right tool.

(I can figure out how to write the batch script to import them all. I just can't figure out the syntax to get one file inserted into the field.)
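
If the MySQL server itself can read the files, LOAD_FILE() may be simpler than mysqlimport; it needs the FILE privilege and a path allowed by secure_file_priv. A sketch with hypothetical table/column names:

code:
-- LOAD_FILE returns NULL (not an error) if the file can't be read
UPDATE products
SET description = LOAD_FILE('/imports/widget_desc.txt')
WHERE product_id = 42;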

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Begby posted:

What's in the text files? It sounds like you'll need some scripting with Python or Perl or something, since you will probably need to escape the data if it has special characters or formatting.

It's a long description of the record. Basically a really long text file that'll fit in a MEMO field in my database. At the moment it's just raw text, but I'll go back in and add some html tags to make it readable when it's called from a web page.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I have a database where I need to collect data from a table for every Friday. If a Friday has no data (because of a vacation day), I need to pull the previous Thursday.

I have this very rudimentary SQL script to pull Friday data:

code:
use [databasename]
SET DATEFIRST 1

SELECT *
into EXT044weekly
FROM EXT044
WHERE DATEPART(dw,ProcessDate) = 5
How do I add a check to see if data exists for a specific week's Friday and if not, use Thursday?
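
One hedged sketch: pull both Thursday and Friday rows, and keep a Thursday only when its week has no Friday row at all.

code:
USE [databasename]
SET DATEFIRST 1

SELECT *
INTO EXT044weekly
FROM EXT044 f
WHERE DATEPART(dw, ProcessDate) = 5
   OR (DATEPART(dw, ProcessDate) = 4
       AND NOT EXISTS (SELECT 1
                       FROM EXT044 f5
                       WHERE DATEPART(dw, f5.ProcessDate) = 5
                         -- DATEDIFF(week, ...) counts Sunday-based week boundaries,
                         -- but Thursday and Friday always share a week either way;
                         -- if rows are per account, also correlate on the account column
                         AND DATEDIFF(week, f5.ProcessDate, f.ProcessDate) = 0))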

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Sprawl posted:

The easiest way I can think of would be selecting all rows where the day part of ProcessDate is 4 or 5, then doing a CASE statement: where the Friday is missing, take Thursday; otherwise take Friday.

I get the first part, but can you show/tell me about the "then do a case statement" part?

I don't know how to parse the results of the first select statement through a case statement (probably because I've been thrown into the DBA role and don't know poo poo about SQL :argh: ).

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Sprawl posted:

Can you give me a Sample table layout with 5-10 rows of data?


Some sample data:

code:
AccountNumber,ValueA,ValueB,ProcessDate
10,55,1,1/1/2010
10,60,2,1/7/2010
10,60,2,1/8/2010
10,65,3,1/13/2010
10,65,3,1/14/2010
10,70,4,1/21/2010
10,70,4,1/22/2010
11,55,11,1/1/2010
11,60,22,1/7/2010
11,60,22,1/8/2010
11,65,33,1/13/2010
11,65,33,1/14/2010
11,70,44,1/21/2010
11,70,44,1/22/2010
12,55,111,1/1/2010
12,60,222,1/7/2010
12,60,222,1/8/2010
12,65,333,1/13/2010
12,65,333,1/14/2010
12,70,444,1/21/2010
12,70,444,1/22/2010
Primary Key is AccountNumber + ProcessDate.

What this represents is the last two days of each week's data for three accounts, but for some reason there is a gap where a Friday was skipped (on 1/15/2010).

I need to create a statement that grabs all of the Friday data but, if there is no data for a specific Friday, uses Thursday instead (in this case 1/14/2010, since 1/15/2010 doesn't exist).
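
Building on Sprawl's suggestion, a sketch against the sample table (call it WeeklyValues): take the Thursday and Friday rows, rank them per account and week with Friday first, and keep the top row. 1/14/2010 survives only because 1/15/2010 is absent.

code:
SET DATEFIRST 1;

WITH ranked AS (
    SELECT AccountNumber, ValueA, ValueB, ProcessDate,
           ROW_NUMBER() OVER (
               PARTITION BY AccountNumber,
                            DATEPART(year, ProcessDate),
                            DATEPART(week, ProcessDate)
               ORDER BY DATEPART(dw, ProcessDate) DESC  -- Friday (5) beats Thursday (4)
           ) AS rn
    FROM WeeklyValues
    WHERE DATEPART(dw, ProcessDate) IN (4, 5)
)
SELECT AccountNumber, ValueA, ValueB, ProcessDate
FROM ranked
WHERE rn = 1;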



While we are at it, and so I don't become a permanent beggar in this thread, can anyone suggest a good "Teach Yourself SQL in 21 Days" book?

Agrikk fucked around with this message at 01:11 on Feb 11, 2010

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Sprawl posted:


a golden treatise of knowledge


Dude, you are the man! Huge thanks for this.


Edit: After messing around with this a bit, I realize that it groups everything by week number, meaning I only get 52 entries for each account number. What if I need to collect data from multiple years?

Agrikk fucked around with this message at 22:57 on Feb 11, 2010

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I have an SSIS package consisting of a simple Foreach Loop Container that pulls CSV files and dumps them into a SQL table.

Is there some way to insert part of the filename into each record from that file as it is loaded into the database?


Basically if the file is called myfile20100401.csv and contains (ColumnA, ColumnB, ColumnC), I'd like the data in the MSSQL table to look like:

ColumnA | ColumnB | ColumnC | 20100401


For clarity, I am using the instructions on how to loop over files in a SSIS job from this web site.
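
One common approach, hedged since the exact names vary by setup: have the Foreach Loop map the current file name into a package variable (say User::FileName), add a Derived Column transformation to the data flow, and carve the date out with an SSIS expression. Assuming names always end in yyyymmdd.csv:

code:
SUBSTRING(@[User::FileName], LEN(@[User::FileName]) - 11, 8)

That yields 20100401 from myfile20100401.csv, and the derived column then lands in the table alongside ColumnA, ColumnB, and ColumnC.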

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Here is what must be a rookie SQL question (because I am a SQL rookie):

I have three queries, each producing two columns of output:

code:
SELECT AccountNumber, 
	Count(CommissionEntered) AS CountOfOptionTrades
FROM EXT010
WHERE SecurityTypeCode = '5' Or SecurityTypeCode = '6'
GROUP BY AccountNumber


SELECT AccountNumber, 
	Count(CommissionEntered) AS CountOfEquityTrades
FROM EXT010
WHERE SecurityTypeCode != '5' AND SecurityTypeCode != '6'
GROUP BY AccountNumber


SELECT AccountNumber, 
	SUM(CommissionEntered) AS TotalCommissionEntered
FROM EXT010
GROUP BY AccountNumber
How do I merge these three queries to produce a single table with four columns:

AccountNumber, CountOfEquityTrades, CountOfOptionTrades, TotalCommissionEntered


I could dump them into temp tables and do a join across them, but that seems inefficient. There must be a way to do this as a single query, but I just don’t know how.
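
One single-pass sketch using conditional aggregation: a CASE inside each aggregate stands in for the three WHERE clauses, and COUNT still ignores the NULLs the CASE produces for non-matching rows.

code:
SELECT AccountNumber,
       COUNT(CASE WHEN SecurityTypeCode NOT IN ('5', '6')
                  THEN CommissionEntered END) AS CountOfEquityTrades,
       COUNT(CASE WHEN SecurityTypeCode IN ('5', '6')
                  THEN CommissionEntered END) AS CountOfOptionTrades,
       SUM(CommissionEntered) AS TotalCommissionEntered
FROM EXT010
GROUP BY AccountNumber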

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
It so happens that both worked.

That's pretty slick guys, thanks!

Agrikk fucked around with this message at 06:38 on Jan 22, 2011

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Another SQL 2008 Query request:

I need some method to pull the most recent 3 complete months of data based on today's date.

So if you run the report any time in January 2011, you receive data for October, November, and December 2010; in February, you receive November and December 2010 plus January 2011, etc.

Assume the table looks like:

AccountID (int)
Amount (money)
ProcessDate (datetime)


and the query will look something like:

code:
SELECT AccountID, COUNT(Amount), SUM(Amount)
FROM table

WHERE (ProcessDate falls in the most recent three complete months)

GROUP BY AccountID

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

bamhand posted:

WHERE month(process_date) = month(dateadd(-1, m, today())) or month(process_date) = month(dateadd(-2, m, today())) or month(process_date) = month(dateadd(-3, m, today()))

I forget if the order of the dateadd arguments are correct. Oh you need to match the years too, sorry, I forgot about that part.

today() isn't a valid function in MS SQL Server, and this was the correct syntax:

code:
WHERE month(ProcessDate) = month(DATEADD (mm, -5, getdate())) 
     or month(ProcessDate) = month(DATEADD (mm, -4, getdate())) 
     or month(ProcessDate) = month(DATEADD (mm, -3, getdate()))
But you were right, it returns those three months for all years.


Aredna posted:

If you need to avoid calculations on the ProcessDate field in order to take advantage of indexes you can also do it as:

code:
WHERE ProcessDate >= CAST(DATEADD(M,-3,DATEADD(D,1-DAY(GETDATE()),GETDATE())) as date)
  AND ProcessDate <  CAST(DATEADD(D,1-DAY(GETDATE()),GETDATE()) as date)

This did the trick. Thanks!

Agrikk fucked around with this message at 21:39 on Feb 1, 2011

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Except that will return the last three months (November, December, January) for every year in the database.

Simpler, yes. Correct, no.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Scaramouche posted:

Just like to say I nominate Jethro for SQL Thread MVP.

He's added three pages to this thread with his responses. Jeeez... Thanks Jethro!


I have a request where, depending on which of three values AccountType takes, I need to place the AccountNumber into one of three columns of the query output. In addition, if a userID is linked to multiple account types, all of its account numbers should land in a single row of the output.

code:
Given this:

userID    AccountType    AccountNumber
001       Individual     11
002       Individual     22
001       Roth IRA       33
003       Roth IRA       44
003       Joint          55
004       Joint          66
003       Individual     77
code:
Produce This:

UserID    IndividualAccount    JointAccount    RothIRAAccount
001       11                   -               33
002       22                   -               -
003       77                   55              44
004       -                    66              -

Agrikk fucked around with this message at 05:18 on Apr 28, 2011

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

camels posted:

What you're looking for is a PIVOT or crosstab query. How to do this depends on what RDBMS you are using. It's better to do this in a single query instead of using a temporary table.

Sprawl posted:

A PIVOT or crosstab would be best, but if you're using MySQL you don't have those.

I'm using SQL 2008 Enterprise. Can you explain further or point me to a good reference? I've never used PIVOT before...
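
A minimal PIVOT sketch for the sample shape, assuming the source table is called UserAccounts; MAX() is just the aggregate PIVOT requires and is harmless while each user has at most one account per type. Missing combinations come back as NULL rather than the dash shown in the sample output.

code:
SELECT userID,
       [Individual] AS IndividualAccount,
       [Joint]      AS JointAccount,
       [Roth IRA]   AS RothIRAAccount
FROM (SELECT userID, AccountType, AccountNumber
      FROM UserAccounts) AS src
PIVOT (MAX(AccountNumber)
       FOR AccountType IN ([Individual], [Joint], [Roth IRA])) AS p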

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Another SQL Server 2008 question:

I need to count the number of specific events that occur on a user account during a specific date range based on its creation date.

I'm confused about how to write a query that retrieves UserID and ActiveDate, then counts occurrences of EventX between ActiveDate and ActiveDate + 42 days.

I keep bumping up against a "A SELECT statement that assigns a value to a variable must not be combined with data-retrieval operations." error and I don't understand how to dump a column record into a variable and then use that variable in a subquery of the original statement.


update:


I tried a different tack: precalculating the range for each UserID and returning a temp table #Dates that has:

UserID, ActiveDate, SixWeeksLater


Then running this query:

code:
select #Dates.AccountID,
	COUNT(1) as '42 Day Trade Count'
from EventData.dbo.EXT010 x10 

LEFT JOIN #Dates on #Dates.AccountID = x10.AccountNumber

where  #Dates.accountid = x10.Accountnumber 
and x10.EntryDate > #Dates.ActiveDate
and x10.EntryDate < #Dates.SixWeeksLater														 
group by #Dates.AccountID
but I get no results. What am I doing wrong here?
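
For comparison, a hedged rewrite (assuming the temp table's key column really is AccountID; the text above says #Dates carries UserID, so check which name actually exists). Moving the date filters into the ON clause keeps the LEFT JOIN honest, and counting a column from x10 lets zero-event accounts show up with 0 instead of vanishing:

code:
select d.AccountID,
       COUNT(x10.AccountNumber) as '42 Day Trade Count'
from #Dates d
left join EventData.dbo.EXT010 x10
       on  x10.AccountNumber = d.AccountID
       and x10.EntryDate > d.ActiveDate
       and x10.EntryDate < d.SixWeeksLater
group by d.AccountID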

Agrikk fucked around with this message at 06:00 on May 9, 2011

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I have a production database whose table contents I need dumped nightly to a second corporate data warehouse. For various reasons we cannot use the typical methods of snapshots, log shipping, or replication, so we have decided to use SSIS packages and T-SQL to do the job.

I've written the easy T-SQL that selects all new records in a specific table and inserts them into the corresponding table in the DW. However:


My problem: how do I select the records that have changed in the production database so I can run an update against the DW table?



This is a vague question, but I don't know enough about the process to know what information I should provide here.
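
What's workable depends on what the production schema offers. A sketch assuming each table carries a LastModified datetime column (a hypothetical name) and the package remembers when it last ran:

code:
DECLARE @LastRun datetime = '2011-06-01T00:00:00';  -- in practice, read from a sync-log table

UPDATE dw
SET    dw.ColA = p.ColA,
       dw.ColB = p.ColB
FROM   Warehouse.dbo.MyTable AS dw
JOIN   Production.dbo.MyTable AS p
  ON   p.Id = dw.Id
WHERE  p.LastModified > @LastRun;

Without any modification stamp, the fallback is comparing whole rows (EXCEPT, or a HASHBYTES/BINARY_CHECKSUM per row), which is essentially what the data-compare tools do.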

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I actually said fuck it and am now looking at a product called SQL Data Compare Pro. It does data comparisons between two databases and can be called via the command line (helpful for scheduled tasks...).

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I have a bunch of CSV files (actually IIS logs processed by Log Parser 2.2 into CSV files) that I need to import into SQL 2008 and I'm having problems importing the date stamp.

Initially I am using the Import and Export Wizard and the timestamp is being detected as DT_DATE, which I think is incorrect.

The log file uses a date stamp format of
code:
6/7/2011  7:00:13 PM
What data type in SQL do I use to collect that data?


DATETIME throws an error that "potential data loss could occur".

Agrikk fucked around with this message at 17:44 on Jun 8, 2011

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Sub Par posted:


Edit to be helpful: maybe just import as text and then fix it?

Import/Export wizard was being stupid. I did just that and it worked just fine.

Thanks.
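
For the record, the fix-up can stay in T-SQL once the stamp is staged as text. A sketch with hypothetical names (IISLog, LogTimeText), leaning on the server's us_english m/d/y parsing for strings like 6/7/2011 7:00:13 PM:

code:
ALTER TABLE IISLog ADD LogTime datetime NULL;

-- LogTimeText is the varchar column the wizard loaded
UPDATE IISLog
SET LogTime = CONVERT(datetime, LogTimeText);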

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I have a table of data that needs to be updated using the following rules and I don't have the foggiest notion of how to go about solving it:

code:
BEFORE:			
GUID	SessionID	MemberID    Timestamp
10	      11	0	      101
10	      11	0	      102
10	      11	0	      103
10	      22	0	      104
10	      33	0	      105
10	      44	1	      106
10	      55	0	      107
10	      66	0	      108
10	      77	0	      109
10	      77	2	      110
10	      77	2	      111
10	      88	0	      112
10	      88	0	      113
10	      99	0	      114
10	      99	1	      115
becomes

code:
AFTER:			
GUID	SessionID	MemberID    Timestamp
10	      11	1	      101
10	      11	1	      102
10	      11	1	      103
10	      22	1	      104
10	      33	1	      105
10	      44	1	      106
10	      55	1	      107
10	      66	1	      108
10	      77	2	      109
10	      77	2	      110
10	      77	2	      111
10	      88	2	      112
10	      88	2	      113
10	      99	1	      114
10	      99	1	      115
Basically the rules are:

1. When a MemberID is first established for a GUID, update all of that GUID's prior rows where MemberID equals zero with the new MemberID (across all sessions).
2. If a different MemberID is later established for that GUID, use the new MemberID for all subsequent sessions.
3. If a session starts with no MemberID but one is established mid-session, update the entire session with that MemberID.
4. If a session has an inherited MemberID (from rule 2) and a new MemberID is established during the session, update the whole session with the new MemberID.
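
A two-statement sketch that reproduces the AFTER table (table name SessionLog assumed; MAX stands in for "the MemberID established during the session," which holds while a session only ever sees one non-zero value):

code:
-- Rules 3 and 4: a session adopts the MemberID established during it
UPDATE t
SET    t.MemberID = s.EstablishedID
FROM   SessionLog t
JOIN  (SELECT [GUID], SessionID, MAX(MemberID) AS EstablishedID
       FROM SessionLog
       GROUP BY [GUID], SessionID) s
  ON   s.[GUID] = t.[GUID]
 AND   s.SessionID = t.SessionID
WHERE  s.EstablishedID > 0;

-- Rules 1 and 2: rows still at zero take the nearest earlier established
-- MemberID, or the first one ever established when nothing precedes them
UPDATE t
SET    t.MemberID = COALESCE(
           (SELECT TOP 1 p.MemberID FROM SessionLog p
            WHERE p.[GUID] = t.[GUID] AND p.MemberID > 0
              AND p.[Timestamp] < t.[Timestamp]
            ORDER BY p.[Timestamp] DESC),
           (SELECT TOP 1 f.MemberID FROM SessionLog f
            WHERE f.[GUID] = t.[GUID] AND f.MemberID > 0
              AND f.[Timestamp] > t.[Timestamp]
            ORDER BY f.[Timestamp] ASC))
FROM   SessionLog t
WHERE  t.MemberID = 0;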

Agrikk fucked around with this message at 19:29 on Jul 7, 2011

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I have a table in SQL Server 2008 with dates broken up like so:

code:

AccountNumber YearMonth   Year    Month   AvgTotEq
00001         201008      2010     8       10
00001         201009      2010     9       124
00001         201010      2010     10      45
00001         201011      2010     11      45
00001         201012      2010     12      30
00001         201101      2011     1       30
00001         201102      2011     2       456
00001         201103      2011     3       456
00001         201104      2011     4       45
00001         201105      2011     5       56

All values are type INT, except AvgTotEq, which is DECIMAL(20,4), and the data in the original table is not sorted. The primary key is composite on AccountNumber and YearMonth.

Among other things the table basically lists the average total equity for every account for every month it exists.

What I need to do is create a query based on this data that, for every month, gives me the three-month rolling average of total equity over that month and the prior two (the average of the averages, so to speak).

Producing this:
code:
AccountNumber YearMonth AvgTotEq
00001         201008
00001         201009
00001         201010    59.67
00001         201011    71.33
00001         201012    40
00001         201101    35
00001         201102    172
00001         201103    314
I am aware that the average of the average will produce slightly different results than a pure 90-day rolling average, but this is the data I have to work with.
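
A 2008-friendly self-join sketch (no windowed frames until 2012), assuming the table is named MonthlyEquity; Year*12+Month turns each YearMonth into a running month counter so the window crosses year boundaries cleanly.

code:
SELECT a.AccountNumber,
       a.YearMonth,
       AVG(b.AvgTotEq) AS Rolling3MoAvg
FROM MonthlyEquity a
JOIN MonthlyEquity b
  ON  b.AccountNumber = a.AccountNumber
  AND b.[Year] * 12 + b.[Month]
      BETWEEN a.[Year] * 12 + a.[Month] - 2
          AND a.[Year] * 12 + a.[Month]
GROUP BY a.AccountNumber, a.YearMonth
HAVING COUNT(*) = 3   -- drop months without a full three-month window
ORDER BY a.AccountNumber, a.YearMonth;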

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Goat Bastard posted:


God, I love this thread. Thanks Goat!

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Jethro posted:

Use a foreach container to iterate through every file in the z:\ImportData directory, and make the file connection string an expression that uses the variables set by the iterator container.

Here's how to do it in SSIS (SQL Server Integration Services):

http://www.sqlis.com/post/Looping-over-files-with-the-Foreach-Loop.aspx


mindphlux posted:

Keep in mind I'm pretty new to working with MS-SQL - I saw that suggested a lot, but I don't see a tool installed on this server I'm working on that looks like any screenshots of like 'foreach containers'. What program do I use to do this? Business Intelligence Studio? Is there a reason a server would have SQL Server Enterprise 2008 installed, but not have that tool on it?

Unfortunately, it's not a server I'm directly in charge of, so I'll basically have to ask the admin to install it if that's what I'm missing. I'd like to avoid sounding completely daft if at all possible.


SSIS is part of the default SQL Server installation. You can also install SSIS on your desktop and build and run packages there that point to your database on another server.


SSIS takes a lot of getting used to and it took me forever to get comfortable with it, but I have drunk the Kool-Aid and it is truly a wonderful, wonderful thing.

Basically you build a package in Visual Studio (also part of the default installation) and deploy it to your SSIS server. It's something that a quick google ("build SSIS package with visual studio") can show you.


By the way, thanks for posting a problem that I can actually contribute to the thread on!

Agrikk fucked around with this message at 18:42 on Oct 25, 2011

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Think I need self joins here but I don't understand them:


Given a SQL Server 2008 table with data like so, partitioned using rownum on accountnumber ordered by yearmonth: (all data is int)

code:

rownum  accountnumber YearMonth qualified 
1          1           201105      0
2          1           201106      1
3          1           201107      0
4          1           201108      1
5          1           201109      1
1          2           201106      0
2          2           201107      1
3          2           201108      0
4          2           201109      1
1          3           201105      0
2          3           201106      1
3          3           201107      1
4          3           201108      1
5          3           201109      0
1          4           201107      0
2          4           201108      0
3          4           201109      0

If an account is qualified in max(yearmonth), what is the first month of its current unbroken streak of qualification? And if it is not qualified in max(yearmonth), what is the most recent month in which it was qualified, if ever?


Output should look like this:

code:

accountnumber qualifiedthismonth firstqualified lastqualified
     1               1              201108          0
     2               1              201109          0
     3               0              0               201108
     4               0              0               0
     
     

Agrikk fucked around with this message at 20:23 on Oct 27, 2011
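
A correlated-subquery sketch that matches the sample output (table name QualHistory assumed; zeros stand in for "never"). The firstqualified trick: the current streak starts at the first qualified month after the account's last unqualified month.

code:
WITH latest AS (
    SELECT accountnumber, MAX(YearMonth) AS maxym
    FROM QualHistory
    GROUP BY accountnumber
)
SELECT q.accountnumber,
       q.qualified AS qualifiedthismonth,
       CASE WHEN q.qualified = 1 THEN
            (SELECT MIN(s.YearMonth) FROM QualHistory s
             WHERE s.accountnumber = q.accountnumber AND s.qualified = 1
               AND s.YearMonth > ISNULL((SELECT MAX(z.YearMonth) FROM QualHistory z
                                         WHERE z.accountnumber = q.accountnumber
                                           AND z.qualified = 0), 0))
            ELSE 0 END AS firstqualified,
       CASE WHEN q.qualified = 0 THEN
            ISNULL((SELECT MAX(p.YearMonth) FROM QualHistory p
                    WHERE p.accountnumber = q.accountnumber
                      AND p.qualified = 1), 0)
            ELSE 0 END AS lastqualified
FROM QualHistory q
JOIN latest m
  ON m.accountnumber = q.accountnumber AND m.maxym = q.YearMonth;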

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

door.jar posted:

I've kind of fallen into some tasks at work that require a lot more SQL than I ever wanted to learn. I mean I knew how to do basic stuff but now my job would be a lot easier if I knew about stored procedures, performance improvements, triggers, creating views etc. So far I've been getting by with google but sometimes not knowing quite what to look for or how to judge terrible advice is really frustrating.

What I'm basically looking for is a suggestion for a reference text or similar that I can buy to give me a go-to source of good advice, MSSQL by preference. I use MSDN constantly, but I'd like a book if there's a good one around (everyone respects people with books on their desk).


Welcome to the thread, door.jar! I feel your pain, as I've fallen into a DBA role building and maintaining a data warehouse even though I have exactly zero DBA experience (but twenty years as a high-availability architect; go figure).

This thread is invaluable and people are very good at helping solve one-off issues.

For long-term training and resources, I recommend the following books:

Microsoft SQL Server 2008 Management and Administration
Microsoft SQL Server 2008 T-SQL Fundamentals
Inside Microsoft SQL Server 2008: T-SQL Programming

These books have helped me out quite a bit, though they're for SQL Server 2008 (not R2); you didn't mention which version you're learning.

Good luck!

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
One would think using row_number() to partition the data and then selecting where rows > 10 and rows < 21 would work. :)

I've got what appears to be a simple problem, but I can't get my head around the solution:

I'm trying to capture every database's size into a table: the actual allocated space across all data files (database_size minus unallocated space), not just database_size itself.

I'm using
code:
select *, GETDATE() as timestamp from openquery([myservername], 'sp_spaceused')
to get the two pieces, but I'd like to iterate over all the databases on the server


and
code:
EXECUTE sp_msforeachdb 'select *,GETDATE() from openquery([myservername], 'sp_spaceused')'
doesn't work (probably because I'm running a stored procedure from within a stored procedure, but I'm new to SPs so I'm not really sure).

Alternately using this:
code:
EXEC sp_msforeachdb @command1="use ? exec sp_spaceused"
Produces the information, but not formatted properly to insert into a table (there are two rows of data produced for each database instead of one)


Any suggestions on how to do this?
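
The nested single quotes are one problem: they terminate the outer string early. But sp_spaceused can be sidestepped entirely by computing used space per database with FILEPROPERTY inside sp_msforeachdb, which yields exactly one row per database. A sketch:

code:
CREATE TABLE #sizes (
    dbname       sysname,
    allocated_mb decimal(18, 2),
    captured_at  datetime
);

EXEC sp_msforeachdb N'
USE [?];
INSERT INTO #sizes
SELECT DB_NAME(),
       SUM(CAST(FILEPROPERTY(name, ''SpaceUsed'') AS bigint)) * 8 / 1024.0,
       GETDATE()
FROM sys.database_files
WHERE type = 0;';   -- type 0 = data files only

SELECT * FROM #sizes ORDER BY dbname;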

Agrikk fucked around with this message at 19:21 on Jan 6, 2012

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
This is a ridiculously dumb question and I should be able to do this by now, but I can't get my head around it:


Given a data field [YearMonth] [int] with format yyyymm

How do I get the last day of that month with a format of yyyy-mm-dd?

Edit: in MS-SQL. Sorry.

Agrikk fucked around with this message at 20:50 on Apr 2, 2012

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

JeansW posted:

You can just convert the Year Month to a date time (with a day of 1) and then add a month to it (to get the 1st day of the next month) and then minus a day.

So it'd be:
code:
SELECT DATEADD(D, -1, DATEADD(M, 1, CONVERT(DATETIME, CAST(YearMonth AS VARCHAR(8)) + '01', 112)))

Oh. Of course. Duh.


Thanks!

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I've been trying to figure out how to perform a restore via command line and I'm stumped. This is MS SQL Server 2008 R2.


In production the database has a file layout like this:

S:\DATA\database.mdf
T:\LOGS\database.ldf

In the lab I want to restore it like:

S:\DATA\database1.mdf
T:\DATA\database2.ndf
U:\DATA\database3.ndf
V:\LOGS\database.ldf


A regular restore will create a single data file and a single log file; it lets me name them anything and put them anywhere, but that's not what I want.

What I'm trying to do is restore a production full database backup into the lab so that the data is split across three files on three different volumes, and I don't know how.

Any suggestions?
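
For completeness, the command-line relocation piece is RESTORE ... WITH MOVE, keyed by the logical file names that RESTORE FILELISTONLY reports; what it cannot do is change the number of files in the backup set. A sketch with hypothetical names:

code:
RESTORE FILELISTONLY FROM DISK = N'X:\backups\proddb_full.bak';  -- find the logical names

RESTORE DATABASE proddb
FROM DISK = N'X:\backups\proddb_full.bak'
WITH MOVE N'proddb_data' TO N'S:\DATA\database1.mdf',
     MOVE N'proddb_log'  TO N'V:\LOGS\database.ldf',
     REPLACE, STATS = 10;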

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Yeah, that's what I thought.

I was hoping I could avoid an intermediate step, but it looks like I'll have to build a staging area, restore the database there, and then create an SSIS package to move the data table by table.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I have two tables (source TableA, destination TableB), and I'd like to copy across only the records newly added to the source. Both tables have ([Id] [int] IDENTITY(1,1)) for a primary key.

I tried doing:

code:
SET IDENTITY_INSERT TableB ON

insert into TableB (
	Id,
	Record1,
	Record2
)
select
	Id,
	Record1,
	Record2
from TableA
where TableA.ID > (select MAX(Id) from TableB)

SET IDENTITY_INSERT TableB OFF
Zero records are transferred. What am I missing here?

Edit: nevermind, I figured it out. The first time I ran it, the destination table returned NULL for max(Id). I inserted an ISNULL statement in the criteria and it worked fine.
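
For anyone skimming later, the fixed predicate from that edit would look like:

code:
where TableA.Id > (select ISNULL(MAX(Id), 0) from TableB)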

Agrikk fucked around with this message at 21:57 on May 8, 2013

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I've completely spaced out on how to subtract one row from another.

If I have unordered data:

code:
units	stamp
18892	2013.06.13.AM.11.48
18892	2013.06.13.PM.01.48
18891	2013.06.12.PM.10.48
18887	2013.06.11.PM.08.47
18887	2013.06.11.PM.10.48
18891	2013.06.13.AM.05.48
18888	2013.06.12.AM.02.48
With the timestamped samples taken roughly every hour, how do I retrieve the hourly increase/decrease?

This is in SQL Server 2008 R2.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

PhantomOfTheCopier posted:

Sadly, this seems to suggest that LEAD/LAG were not made available until 2012, and comments note that you'll be stuck doing a join on rownumber-1. As I was searching for a way to do this, I came up with something like the following in PostgreSQL using window functions:
code:
SELECT stamp,units,units-LAG(units) OVER (ORDER BY stamp) AS diff FROM table ORDER BY stamp DESC;
Is it too much to hope there's a WITH clause so something like this will work?
code:
WITH t2 AS (
  SELECT t.*,ROW_NUMBER() OVER (ORDER BY stamp DESC) AS rnum FROM table t
  )
SELECT
  t1.units,t1.stamp,t1.rnum,t2.rnum,t1.units-t2.units AS diff
FROM t2 t1
  LEFT OUTER JOIN t2 ON (t1.rnum+1=t2.rnum)
ORDER BY stamp DESC;
(Yeah, the costs for the window approach are excessively below those for the join and, no, I have no way to test sanity of any of this against squirrel server. Maybe this is not the best approach to solving your problem?)

Thanks for the reply.

I can't believe the join-on-rownumber-minus-one technique is still the way to do this. Google is loaded with people hitting this problem, and it seems like it would be a relatively minor thing to add a built-in command for it.

I'll have a look at the with clause approach as well to see which is more efficient in my setup.

Agrikk fucked around with this message at 18:22 on Jun 14, 2013


Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I am parsing kill logs from a game server and trying to determine how many matches a playername has participated in (in SQL Server 2012).

I have data like:

code:
Table KillInfo

MatchID  Killer  Victim
1         Foo     Foo
1         Foo     Bar
1         Bar     Foo
1         Bar     Buzz
1         Bar     Foo
1         Foo     Buzz
2         Foo     Foo
2         Foo     Bar
2         Bar     Foo
2         Bar     Buzz
2         Bar     Foo
2         Foo     Buzz
3         Bar     Foo
3         Foo     Bar
3         Foo     Foo

What's the best way to determine the number of matches in which a playername appears (as either killer or victim)?

I have a kludged-together method where I select distinct MatchID, Killer; union it with MatchID, Victim; select distinct from that to purge duplicate records; and then do a count(1) grouped by name to get the totals.

This seems really kludgy and inefficient.

Is there a better way to do this?
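
On SQL Server 2012, one tidier option is unpivoting with CROSS APPLY (VALUES ...), which makes the union-and-distinct dance implicit:

code:
SELECT p.PlayerName,
       COUNT(DISTINCT k.MatchID) AS Matches
FROM KillInfo k
CROSS APPLY (VALUES (k.Killer), (k.Victim)) AS p(PlayerName)
GROUP BY p.PlayerName;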


Agrikk fucked around with this message at 20:10 on Aug 4, 2013
