|
Scaramouche posted:
I can do this, but it doesn't recover the X, Y, and Z tied to the row with the max T value.
|
# ? Feb 10, 2011 02:53 |
|
|
|
Maybe this? You can ORDER BY something else (like A, B or C) first and it should still work. code:
butt dickus fucked around with this message at 03:22 on Feb 10, 2011 |
# ? Feb 10, 2011 02:57 |
|
Doctor rear end in a top hat posted:Maybe this? You can ORDER BY something else (like A, B or C) first and it should still work. That also doesn't pull X, Y, and Z, and that also doesn't filter, does it? Before I was getting:
A1 B1 C1 (T1) X Y Z
A1 B1 C1 (T2) X Y Z
A2 B2 C2 (T3) X Y Z
T is in parens because I'm not selecting it in the production query, though that's not a big deal. Instead, I want to get:
A1 B1 C1 (T1) X Y Z
A2 B2 C2 (T3) X Y Z
|
# ? Feb 10, 2011 03:08 |
|
code:
Right now the clause is: code:
code:
EDIT: Never mind, it was a problem with my data. Quebec Bagnet fucked around with this message at 06:47 on Feb 10, 2011 |
# ? Feb 10, 2011 06:42 |
|
kimbo305 posted:That also doesn't pull X, Y, and Z, and that also doesn't filter, does it? code:
You'll have to put your join in there still, but I'm pretty sure this will filter t1 the way you want.
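For anyone following along, the join-against-the-group-max idea can be sketched like this (using Python's sqlite3 so it runs self-contained; the table name and values are made up, with columns A/B/C/T/X/Y/Z taken from the example rows above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (a TEXT, b TEXT, c TEXT, t INTEGER, x INTEGER, y INTEGER, z INTEGER);
INSERT INTO t VALUES
  ('A1','B1','C1', 1, 10, 11, 12),
  ('A1','B1','C1', 2, 20, 21, 22),
  ('A2','B2','C2', 3, 30, 31, 32);
""")

# Join each row against the per-group maximum of T; only the row
# carrying the max survives, and its X, Y, Z come along with it.
rows = conn.execute("""
    SELECT t.a, t.b, t.c, t.t, t.x, t.y, t.z
    FROM t
    JOIN (SELECT a, b, c, MAX(t) AS max_t FROM t GROUP BY a, b, c) m
      ON t.a = m.a AND t.b = m.b AND t.c = m.c AND t.t = m.max_t
    ORDER BY t.a
""").fetchall()
print(rows)
```

One row per (A, B, C) group comes back, with the X/Y/Z values from the max-T row rather than arbitrary ones.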
|
# ? Feb 10, 2011 06:45 |
|
Derpes Simplex posted:
I think you also have your null check inverted, don't you? Checking for x IS NOT NULL AND [some condition] will give you the equivalent of "x != null ? [some condition] : false". I think you want code:
|
# ? Feb 10, 2011 06:54 |
|
kimbo305 posted:[Primary Key Problems] code:
|
# ? Feb 10, 2011 16:08 |
|
I have a mostly pointless blog, that receives a lot of comment spam for some reason. Looking at the comment moderation list, most of the 'authors' names have 3 digits in them, e.g. Cassie001, piuyo433, gooloo461, etc. Is there a way to select all records containing at least 3 digits in the 'name' column using mysql?
|
# ? Feb 11, 2011 08:38 |
|
beuges posted:Is there a way to select all records containing at least 3 digits in the 'name' column using mysql?
|
# ? Feb 11, 2011 12:37 |
|
RoadCrewWorker posted:http://dev.mysql.com/doc/refman/5.0/en/regexp.html looks like it has what you want. Something like REGEXP '[[:digit:]]{3}' or variant thereof might work, although you probably can and should fine-tune it depending on the spam names. Awesome... thanks!
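A rough sketch of that digit filter, for reference. SQLite (used here so the snippet runs self-contained) has no built-in REGEXP, so one is registered from Python; MySQL supports REGEXP natively as in the linked manual page. `[0-9]{3}` stands in for `[[:digit:]]{3}`, since Python's `re` doesn't know POSIX classes. Table and names are made up from the examples above:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite's `x REGEXP y` operator calls a user function regexp(y, x),
# which doesn't exist by default -- register one backed by Python's re.
conn.create_function(
    "regexp", 2,
    lambda pat, s: s is not None and re.search(pat, s) is not None)

conn.executescript("""
CREATE TABLE comments (name TEXT);
INSERT INTO comments VALUES ('Cassie001'), ('piuyo433'), ('gooloo461'), ('Alice');
""")

# Names containing at least 3 consecutive digits anywhere.
spammy = [r[0] for r in conn.execute(
    "SELECT name FROM comments WHERE name REGEXP '[0-9]{3}'")]
print(spammy)
```

All three spam-looking names match and the plain one doesn't.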
|
# ? Feb 11, 2011 13:02 |
|
I don't know about MySQL, but on SQL Server you can even just do this with LIKE: WHERE field LIKE '%[0-9][0-9][0-9]%'
|
# ? Feb 11, 2011 16:04 |
|
Goat Bastard posted:
Jethro posted:The solution to this sort of problem is always analytic functions, assuming your DBMS supports them, which Vertica appears to. Thanks, guys, these both work (along with the bodge I did at the end of last page). I need to find some quiet time to benchmark to see which one is faster. I imagine the general analytics solution compiles into a plan that's similar to the max() version?
|
# ? Feb 11, 2011 23:06 |
|
Here's a stupid question. We're looking at migrating from MS Access 2003 to SQL Server 2008 (thank christ), and one of the things I think we'll be losing is the "Switchboard" functionality Access provides to quickly make changes to specific tables. We are using it to configure the app for the client's setup, something that would normally go in a .properties or .xml file except it's something we don't want end users to have (easy) access to. I didn't see anything on the first two pages of a number of searches. Does this concept not exist in SQL Server? (I wouldn't expect it to, but who knows.) Meaning I'd have to build a small app to make those quick changes in the same way? --- Here's another stupid question. For some reason, the way the database is laid out in Access is there are several mdb files (about 10), each of which containing one or more tables. Don't ask why. Importing individual mdb files into SQL Server is a breeze, but I'm looking to automate the process (install SQL Server, create a database, point to the old directory containing a bunch of mdb files and Go). I'm still new to SQL Server (my experience is mostly postgres and mysql), is there a way to batch-import a number of mdb files? Or I should again be writing a small import app? --- Any general advice on Access 2003 -> SQL Server 2008? It's for a desktop app written in C++, I'm starting to look into what drivers/libraries/code will need to change in the program. Anything I should definitely be avoiding, or not avoiding? epswing fucked around with this message at 16:02 on Feb 14, 2011 |
# ? Feb 14, 2011 15:54 |
|
Why not just move the tables from Access to SQL Server and use the 'linked tables' functionality? That way, you keep everything in Access that the end user is used to seeing, and won't lose your switchboard app, but your data still lives in SQL Server. For the many-to-one import, you're best off doing them each individually. Yes, SQL Server has a way to do it, but it's through SQL Server Integration Services, and isn't very friendly to even SQL Server DBAs. As someone new to the world of MSSQL, it might cause you to jump off a bridge, or at least never want to use SQL Server again.
|
# ? Feb 14, 2011 16:45 |
|
slartibartfast posted:Why not just move the tables from Access to SQL Server and use the 'linked tables' functionality? That way, you keep everything in Access that the end user is used to seeing, and won't lose your switchboard app, but your data still lives in SQL Server. Interesting. I'll have to look into this. If there's a change to the schema, would I make the change in Access, or SQL Server? Am I wrong in thinking it would be nice (sane) to obliterate all ties to Access and just deal with one database? What's the motivation for the hybrid (other than the switchboard)? slartibartfast posted:For the many-to-one import, you're best off doing them each individually. Yes, SQL Server has a way to do it, but it's through SQL Server Integration Services, and isn't very friendly to even SQL Server DBAs. As someone new to the world of MSSQL, it might cause you to jump off a bridge, or at least never want to use SQL Server again. Sorry I wasn't clear, the database/app is in production for hundreds of clients, and we have a small service department. Upgrading a client needs to be quick and painless, so importing 10+ mdb files one by one, for each client, isn't really feasible. The migration app I envision is really just "create database in sql server, create all necessary tables in that database, then for each mdb file, for each table, select * and insert into identically named empty table in sql server." There are no foreign keys in any of the access databases () so there won't be any referential integrity problems. I was hoping there was some batch-import-wizard-like functionality I wasn't seeing in SSMS. epswing fucked around with this message at 17:03 on Feb 14, 2011 |
# ? Feb 14, 2011 16:58 |
|
epswing posted:The migration app I envision is really just "create database in sql server, create all necessary tables in that database, then for each mdb file, for each table, select * and insert into identically named empty table in sql server." There are no foreign keys in any of the access databases () so there won't be any referential integrity problems. I was hoping there was some batch-import-wizard-like functionality I wasn't seeing in SSMS. I agree with slartibartfast that SSIS can be unfriendly until you know how to use it, but just making packages based off of the wizard is usually pretty safe.
|
# ? Feb 15, 2011 22:06 |
|
Melraidin posted:Salt and hash the passwords. This will prevent people with access to the DB from being able to determine the original password in any reasonable time. If you only hash the passwords then rainbow tables could be used to find values that would hash to the same password. Don't do this. It was the common wisdom for some time, but has since been shown to be a bad idea. See this link for the details, but the short version is that common hashing functions like MD5 and SHA1 were designed for speed. This is good when using them to verify a lot of data, but bad when using them for passwords. Salts can help dodge rainbow tables when done right, but using GPU acceleration a modern computer can brute-force even salted passwords fairly rapidly. See here for a real world example. Basically use a hashing algorithm intended for passwords, not speed. Speedy hashes are bad for security.
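A minimal sketch of what "a hashing algorithm intended for passwords" looks like in practice. PBKDF2 is used here only because it ships in the Python stdlib; bcrypt, scrypt, or argon2 are the usual dedicated choices. The function names and iteration count are illustrative, not from the linked articles:

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=200_000):
    # A fresh random salt per password defeats precomputed rainbow tables;
    # the high iteration count is what makes GPU brute-forcing expensive.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest, iterations

def verify_password(password, salt, digest, iterations):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking prefix matches via timing.
    return hmac.compare_digest(candidate, digest)
```

Store the salt, digest, and iteration count alongside the user record; bump the iteration count as hardware gets faster.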
|
# ? Feb 16, 2011 03:09 |
|
Softball question for you SQL experts. I've got two tables and I want to query the database and get information from both of them joined into a single result. The first table is tile locations with x, y and a bunch of other data. The second table is units which occupy an individual tile location. A tile can be unused of course and when that's the case I want the result to say NULL or 0, or something I can use to tell there's no unit in that location. code:
|
# ? Feb 16, 2011 19:39 |
|
select L.id, L.PosX, L.PosY, U.Id as UnitId from location L left join unit U on L.Id = U.LocId
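That query can be seen end to end with a few made-up tiles and units (sqlite3 used so it runs self-contained; the table and column names follow the query above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE location (Id INTEGER PRIMARY KEY, PosX INTEGER, PosY INTEGER);
CREATE TABLE unit (Id INTEGER PRIMARY KEY, LocId INTEGER REFERENCES location(Id));
INSERT INTO location VALUES (1, 0, 0), (2, 0, 1), (3, 1, 0);
INSERT INTO unit VALUES (10, 1), (11, 3);
""")

# LEFT JOIN keeps every tile; unoccupied tiles come back with NULL
# (Python None) in the UnitId column, which is the "no unit here" signal.
rows = conn.execute("""
    SELECT L.Id, L.PosX, L.PosY, U.Id AS UnitId
    FROM location L LEFT JOIN unit U ON L.Id = U.LocId
    ORDER BY L.Id
""").fetchall()
print(rows)
```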
|
# ? Feb 16, 2011 19:42 |
|
I wonder if someone could give me some pointers regarding a table structure for a hierarchical team setup. I am developing a workforce management system for about 2000 employees (using Access). I have the employee side of things set up fine - one employee record to many employee detail records, where employee detail is things like hours, shift pattern, basically anything that changes over time so we can have a historical picture of their employment. Currently these detail records also store the team the employee is in as we want to know that historically too. Where I am struggling is the team structure and the best way to store and manage it. There are about three levels of management and at each level we need to be able to pull results for everything under each span of command. Level one is fine, as there is only one team underneath each low-level manager. At level two, each manager manages around 10 lower-level managers so we need to be able to query information about those 10 lower-level managers and also the members of staff they manage. So on and so forth. I'm not sure if I am Googling poo poo terms or what, but I cannot find practical examples of how best to do this. Has anyone done anything similar and would you care to share how you tackled this? Thanks.
|
# ? Feb 16, 2011 23:28 |
|
G-Dub posted:Where I am struggling is the team structure and the best way to store and manage it. There are about three levels of management and at each level we need to be able to pull results for everything under each span of command. Level one is fine, as there is only one team underneath each low-level manager. At level two, each manager manages around 10 lower-level managers so we need to be able to query information about those 10 lower-level managers and also the members of staff they manage. So on and so forth. Multiple tables with foreign keys table level1 id manager table level2 id manager level1id table level3 id manager level2id etc... Something like that?
|
# ? Feb 16, 2011 23:43 |
|
Doctor rear end in a top hat posted:Multiple tables with foreign keys Is that really a good idea? What happens if the company gets larger and they start to have a fourth tier of management?
|
# ? Feb 17, 2011 00:58 |
|
Someone somewhere recently brought up a technique to represent a parent/child/tree relationship in a relational database without happy things like WITH RECURSIVE. If you have that in your database engine of choice, use it, it's a no-brainer. I'm still trying to figure out why I didn't bookmark the article I'm thinking about. Going to have to go diving...
|
# ? Feb 17, 2011 01:13 |
|
McGlockenshire posted:Someone somewhere recently brought up a technique to represent a parent/child/tree relationship in a relational database without happy things like WITH RECURSIVE. If you have that in your database engine of choice, use it, it's a no-brainer. You might be thinking of the Nested Set Model. MySQL has a page on it (that isn't loading right now), but it looks like the Wikipedia page has enough to explain it. The technique relies on ordering your data in the table in a certain way. http://en.wikipedia.org/wiki/Nested_set_model
|
# ? Feb 17, 2011 01:46 |
|
Yeah, that looks similar to if not identical to what I was thinking of. Bookmarking it this time around, thanks!
|
# ? Feb 17, 2011 04:49 |
|
epswing posted:select L.id, L.PosX, L.PosY, U.Id as UnitId from location L left join unit U on L.Id = U.LocId This worked great. Thanks!
|
# ? Feb 17, 2011 06:46 |
|
McGlockenshire posted:Yeah, that looks similar to if not identical to what I was thinking of. Bookmarking it this time around, thanks! Check out this guy for folks using SQL Server 2008: http://msdn.microsoft.com/en-us/library/ms186243.aspx Edit: It has the example of a multi-level organization hierarchy with managers and stuff
|
# ? Feb 17, 2011 15:55 |
|
code:
Is there any way to do this without putting the results into a table variable, then insert all of the values in Column1, insert all the values in Column2, etc.? Edit: I found the magic of CROSS APPLY and was able to get it into a row for each integer at a much earlier point. Thanks! POKEMAN SAM fucked around with this message at 19:02 on Feb 17, 2011 |
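For anyone who can't use CROSS APPLY, the portable fallback for turning N columns into N rows is one SELECT per column glued with UNION ALL. A sketch with a hypothetical `wide` table and column names, since the original query wasn't shown:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE wide (id INTEGER, Column1 INTEGER, Column2 INTEGER, Column3 INTEGER);
INSERT INTO wide VALUES (1, 10, 20, 30), (2, 40, 50, 60);
""")

# Portable unpivot: one branch per column. SQL Server's
# CROSS APPLY (VALUES ...) expresses the same thing in a single scan.
rows = conn.execute("""
    SELECT id, Column1 AS val FROM wide
    UNION ALL SELECT id, Column2 FROM wide
    UNION ALL SELECT id, Column3 FROM wide
    ORDER BY id, val
""").fetchall()
print(rows)
```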
# ? Feb 17, 2011 18:32 |
|
Another question. I've got a table that has many rows per UserId, and I want to grab just one of them, since the field I want is actually data duplicated on all of the rows. The only thing I could think of was doing: SELECT UserId, MIN(FieldIWant) FROM Table GROUP BY UserId but this is slow because it has to look at all of the rows and compare the FieldIWant value. I just want one of them, and I don't care which one!
|
# ? Feb 17, 2011 19:44 |
|
Ugg boots posted:Another question. I've got a table that has many rows per UserId, and I want to grab just one of them, since the field I want is actually data duplicated on all of the rows. The only thing I could think of was doing: You could just take out the MIN and it will still give you just one row per UserId. This might be faster: SELECT DISTINCT UserId, FieldIWant FROM Table
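The two shapes side by side, with made-up data where FieldIWant really is duplicated across a user's rows (the premise from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (UserId INTEGER, FieldIWant TEXT);
INSERT INTO t VALUES (1,'a'), (1,'a'), (1,'a'), (2,'b'), (2,'b');
""")

# Because FieldIWant is identical within a UserId, DISTINCT collapses
# the duplicates to one row per user without any MIN/MAX comparison.
rows = conn.execute(
    "SELECT DISTINCT UserId, FieldIWant FROM t ORDER BY UserId").fetchall()
print(rows)
```

Whether DISTINCT actually beats GROUP BY + MIN depends on the engine and the indexes; with a covering index on (UserId, FieldIWant) either one can be a pure index scan.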
|
# ? Feb 17, 2011 19:48 |
|
Doctor rear end in a top hat posted:You could just take out the MIN and it will still give you just one row per UserId. This totally helped, thanks!
|
# ? Feb 17, 2011 20:05 |
|
I've got an ODBC-compliant database on some server, the contents of which I'd like to copy to a mysql DB on another server. Can I just start up mysql, connect to the first server using the DSN and select all to get all the data? The mysql documentation was more concerned with connecting from access to mysql and they suggested all sorts of tools to migrate a DB to mysql, but they had no suggestions for keeping two separate DBs.
|
# ? Feb 18, 2011 05:00 |
|
It's been a while since I've done database work, so I have a pretty basic question. I have a SQLite3 database with a table Foo that has many Bar. I want, in return, to create an array of structures based on Foo that contains a sub-array of all of its Bars. What's the most efficient way to do this? I know this is an n+1 problem, but what's the best way around it? Google just gives me answers based on various ORMs, which I am not using. My current idea is to make the Foo array a hash instead, using the Foo database key as the hash key. Then do a "SELECT * FROM Foo INNER JOIN Bar ON Foo.key = Bar.fookey". Then, as I iterate over the results, do a hash lookup of the Foo.key, and add the current row's results to that Foo struct's Bar array. If no results are found in the hash, create a new Foo struct and insert it into the hash. However, I'm wondering if doing n hash lookups is that much cheaper than doing n database SELECTs. My other idea would be to add an "ORDER BY Foo.key" and then just iterate over the results while keeping an index into my array. If array[index].fookey = result.fookey, then add the current result to the current Foo struct's Bar array. If not, create a new Foo struct, add it to the end of the array, insert the current result into that Bar array, then advance the index to point to this new Foo struct. Faster, but at the expense of readability. However, I know this is a common problem, but my Google-fu is weak. What is the best practice for solving it?
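The first idea described above (one join, then group in application code with a hash) can be sketched like this — Foo/Bar schemas are made up to match the post, and yes, the in-memory dict lookups are far cheaper than a database round-trip per Foo:

```python
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE foo (key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE bar (id INTEGER PRIMARY KEY, fookey INTEGER, val TEXT);
INSERT INTO foo VALUES (1, 'first'), (2, 'second');
INSERT INTO bar VALUES (1, 1, 'x'), (2, 1, 'y'), (3, 2, 'z');
""")

# One query instead of 1 + n; group the joined rows as they stream by.
foos = {}
bars = defaultdict(list)
for key, name, val in conn.execute("""
    SELECT foo.key, foo.name, bar.val
    FROM foo JOIN bar ON foo.key = bar.fookey
"""):
    foos[key] = name            # idempotent; duplicates just overwrite
    bars[key].append(val)
print(foos, dict(bars))
```

The ORDER BY variant in the post saves the hash lookups but, as noted, at the cost of readability; for most workloads the dict version is fast enough.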
|
# ? Feb 18, 2011 12:26 |
|
Here's one that seems simple on the surface but is breaking my brain: Given a table full of records with DATETIME columns for start time and end time, I'm trying to produce an output table with the number of active records in given intervals, including those which both started and ended within the interval. Right now for any arbitrary interval the following query seems to work: code:
edit: welp, got the first option working, no idea how bad this is though code:
wolrah fucked around with this message at 01:26 on Feb 21, 2011 |
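The overlap test that makes interval counting work is compact: a record is active during [lo, hi) iff it starts before hi AND ends after lo, which also catches records entirely inside the interval. A sketch with made-up call records, since the original query wasn't preserved:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE calls (start_t TEXT, end_t TEXT);
INSERT INTO calls VALUES
  ('2011-02-20 09:05', '2011-02-20 09:20'),
  ('2011-02-20 09:50', '2011-02-20 10:10'),
  ('2011-02-20 10:15', '2011-02-20 10:20');
""")

def active_count(lo, hi):
    # Active during [lo, hi): started before the interval ends AND
    # ended after it begins. Covers records fully inside the interval too.
    return conn.execute(
        "SELECT COUNT(*) FROM calls WHERE start_t < ? AND end_t > ?",
        (hi, lo)).fetchone()[0]

print(active_count('2011-02-20 09:00', '2011-02-20 10:00'))
print(active_count('2011-02-20 10:00', '2011-02-20 11:00'))
```

The boundary-straddling call is counted in both hours, which is usually what "active records per interval" wants.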
# ? Feb 21, 2011 00:43 |
|
Disclosure: Not an SQL expert. Also apologize for the formatting looking like poo poo. Proc SQL in SAS, sorted by Rec. So, a large table with data similar to below. My challenge is summing up the Amt column for particular ranges of records, according to the following rules: 1. Sum records along the range where encountering the first 22 Code and stopping at the subsequent 12 Code. (so in the table below, the first 5 records would have the amount summed). 2. Sum records starting at the next encounter with a 22 code until encountering another 12 Code, or reaching the end of the group defined by Rec. (so in the table below, it would be the summation of records 8 through 14) 3. Any records that do not fall between a 22 code and a 12 code, such as record 6, should not be summed.
Rec Code Amt
|a | 22| 120
|a | 22| 90
|a | --| 80
|a | --| --
|a | 12| 500
|a | --| 300
|a | --| --
|a | 22| 300
|a | 22| 60
|a | --| 15
|b | 22| 600
|b | 22| 80
|b | --| --
|b | 12| 20
|c | 22| 400
|c | 12| 30
|c | --| --
|c | --| 400
|c | --| --
|c | 22| 90
|c | --| --
|c | 12| --
etc.. Rip Testes fucked around with this message at 01:12 on Feb 21, 2011 |
# ? Feb 21, 2011 00:50 |
|
What defines what order the rows are in? Physical storage order, or is there a primary key not shown? What dialect of SQL are you using?
|
# ? Feb 21, 2011 01:06 |
|
Hammerite posted:What defines what order the rows are in? Physical storage order, or is there a primary key not shown? What dialect of SQL are you using? If you're reference the post just above, SAS's implementation of SQL, whatever that is analogous to. That's the set as it now stands and I presume I'd have to break it into multiple tables and create additional variables.
|
# ? Feb 21, 2011 01:23 |
|
I think you're going to need a variable that you update as it reads each row. Can you just read in the data via SQL and then do it in SAS? SAS is like a million times more flexible.
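The single-pass-with-a-flag idea suggested here can be sketched in Python (the same logic maps to a SAS data step with a retained variable). The rows mirror the 'a' group from the sample table, with None standing in for the blank cells:

```python
# Hypothetical rows as (rec, code, amt), already sorted by Rec.
rows = [
    ('a', '22', 120), ('a', '22', 90), ('a', None, 80), ('a', None, None),
    ('a', '12', 500), ('a', None, 300), ('a', None, None),
    ('a', '22', 300), ('a', '22', 60), ('a', None, 15),
]

def sum_ranges(rows):
    # Start summing at the first '22', stop and emit at a '12';
    # an open range also closes at the end of its Rec group.
    sums, total, in_range, cur_rec = [], 0, False, None
    for rec, code, amt in rows:
        if rec != cur_rec:                    # group boundary
            if in_range:
                sums.append(total)
            cur_rec, total, in_range = rec, 0, False
        if code == '22' and not in_range:
            in_range = True
        if in_range:
            total += amt or 0
        if code == '12' and in_range:
            sums.append(total)
            total, in_range = 0, False
    if in_range:                              # close the last open range
        sums.append(total)
    return sums

print(sum_ranges(rows))
```

For the sample group this yields 790 (rule 1: records 1-5) and 375 (rule 2: the 22 range that runs to the end of the group), with the stray record outside any range ignored per rule 3.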
|
# ? Feb 22, 2011 16:03 |
|
At work, we run SQL Server 2008 R2 and have a schedule of weekly full backups and daily transaction log backups. On a normal day, our transaction log backup is 300 to 700 MB, but this morning's was 2.7 GB. As far as we know, there wasn't anything special done to the database yesterday. Is there anything I can do to investigate the transaction log backup or anything else to see what might have caused that?
|
# ? Feb 22, 2011 20:04 |
|
|
|
I need help understanding a particularly long-running query. I'm importing data from another database into a table code:
I have a View on this table like so code:
Whenever I try a LEFT JOIN on this view on EmployeeNumber the query time is ridiculous. If I limit the results to the top 1000 I'm waiting for nearly a minute. What I don't get is why, and I have no idea how to improve the performance - it seems as simple as can be. I should add that there are a LOT of null values here Nurbs fucked around with this message at 05:38 on Feb 23, 2011 |
# ? Feb 23, 2011 05:36 |