|
Of course, thanks for the Java syntax!
|
# ? Oct 28, 2012 22:36 |
|
|
# ? May 30, 2024 16:50 |
|
How would I do an INSERT-SELECT with some 'hardcoded' values? ie. SQL code:
|
# ? Nov 9, 2012 15:31 |
|
Sab669 posted:How would I do an INSERT-SELECT with some 'hardcoded' values? SQL code:
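The original snippets were lost, but the shape of the answer is just that literal values can sit alongside selected columns in the SELECT feeding the INSERT. A sketch with made-up table and column names:

```sql
-- Hypothetical tables; 'ARCHIVED' and 'batch_job' are the hardcoded values,
-- the rest come from the SELECT
INSERT INTO archive_orders (order_id, customer_id, status, archived_by)
SELECT order_id, customer_id, 'ARCHIVED', 'batch_job'
FROM orders
WHERE order_date < '2012-01-01';
```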
|
# ? Nov 9, 2012 15:39 |
|
So simple, I always overcomplicate stuff
|
# ? Nov 9, 2012 15:46 |
|
Structure and efficiency question: Outer Join vs. Union with a NOT IN (for SQL Server 2008) Simplified from current situation for clarity:code:
A user can belong to several groups. All training assigned to the group is required to be taken by the users in that group at least once, but possibly annually. Therefore a user may have several UserTraining records for a single training, or none at all if the user has never taken the training. Users also have the option of taking training that is not required by the group. For any User + Training combination, there is only 1 record marked Active. The question is what is the most efficient way to build the complete list of both started/completed training and training that is required but not started or completed? I've got two solutions given @UserUID: code:
code:
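The posted queries didn't survive, but the two shapes being compared probably look roughly like this (the schema below is guessed from the description, so treat the names and join columns as assumptions):

```sql
-- Approach 1: outer join; NULL status marks required-but-untaken training
SELECT t.TrainingID, ut.Status
FROM GroupTraining gt
JOIN Training t ON t.TrainingID = gt.TrainingID
LEFT JOIN UserTraining ut
       ON ut.TrainingID = t.TrainingID
      AND ut.UserUID = @UserUID
      AND ut.Active = 1
WHERE gt.GroupID IN (SELECT GroupID FROM UserGroup WHERE UserUID = @UserUID);

-- Approach 2: union of taken training with required training NOT IN the taken set
SELECT ut.TrainingID, ut.Status
FROM UserTraining ut
WHERE ut.UserUID = @UserUID AND ut.Active = 1
UNION ALL        -- the two branches are disjoint by construction
SELECT gt.TrainingID, NULL
FROM GroupTraining gt
JOIN UserGroup ug ON ug.GroupID = gt.GroupID
WHERE ug.UserUID = @UserUID
  AND gt.TrainingID NOT IN
      (SELECT TrainingID FROM UserTraining
        WHERE UserUID = @UserUID AND Active = 1);
```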
Mindless fucked around with this message at 21:52 on Nov 12, 2012 |
# ? Nov 12, 2012 21:47 |
|
You probably aren't going to find out until you insert a million rows and ask whatever analogue SQL Server has for EXPLAIN. Query plans can change drastically based on the amount of data present in all of the other database systems, and SQL Server is surely no different.
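For reference, SQL Server's closest EXPLAIN equivalents are the SHOWPLAN/STATISTICS session options (these option names are real T-SQL; the query itself is a placeholder):

```sql
-- Estimated plan: queries are compiled but not executed
SET SHOWPLAN_ALL ON;
GO
SELECT * FROM UserTraining WHERE UserUID = 42;
GO
SET SHOWPLAN_ALL OFF;
GO

-- Actual plan: runs the query and returns per-operator execution info
SET STATISTICS PROFILE ON;
```

Management Studio also exposes both as "Display Estimated Execution Plan" and "Include Actual Execution Plan" buttons.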
|
# ? Nov 12, 2012 22:52 |
|
This isn't necessarily a SQL question, but what's the difference between Windows Authentication and SQL Server authentication for SQL Server Management Studio, and why would one use one over the other? In school we always did SQL Authentication, but this customer I'm working with uses Windows Authentication. Wasn't sure if there's a difference / what might persuade a user to go one way instead of the other.
|
# ? Nov 13, 2012 15:01 |
|
Sab669 posted:This isn't necessarily a SQL question, but what's the difference between Windows Authentication and SQL Server authentication for SQL Server Management Studio? In school we always did SQL Authentication, but this customer I'm working with uses Windows Authentication. Wasn't sure if there's a difference / what might persuade a user to go one way instead of the other. Windows authentication provides a bunch of advantages. For one thing, you're not transmitting or storing a password of any kind - you're leaving it up to Active Directory. That's far more secure, especially when you think about how SQL connection strings are usually stored. What's better to keep stored around your production web servers, "Source=Blah;Integrated Security=true" or "Source=Blah;Uid=sa;Password=who-cares-if-it's-hard-it's-plain-text"? It's also a lot easier as a developer, since you can just log in as the Windows user you're currently using. (think single sign-on)
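Side by side, the two connection-string styles look something like this (server/database names are made up; the key names are standard ADO.NET ones):

```
Windows auth:  Server=myServer;Database=myDb;Integrated Security=true
SQL auth:      Server=myServer;Database=myDb;User Id=appUser;Password=hunter2
```

The second line is the one you don't want sitting in a config file on a web server.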
|
# ? Nov 13, 2012 15:24 |
|
Here's a really really dumb question. So, Oracle: I've got a table with a dozen or so fields which need to be concatenated together, inline, in a select statement, separated by newlines. code:
|
# ? Nov 15, 2012 07:17 |
|
bucketmouse posted:Here's a really really dumb question. NULL || 'any string' returns NULL. You can use COALESCE to convert your NULL column values to non-null values: code:
|
# ? Nov 15, 2012 08:49 |
|
camels posted:NULL || 'any string' returns NULL. You can use COALESCE to convert your NULL column values to non-null values: Not in Oracle. || is the concatenation operator, and NULL || 'any string' returns the string. Equally unintuitively, Oracle considers the empty string to be null, so those COALESCE calls are no-ops. code:
code:
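The lost snippets presumably demonstrated the two quirks; a small sketch (column and table names are made up):

```sql
-- Oracle ignores NULL operands in ||, so no COALESCE/NVL is needed to concatenate
SELECT 'a' || NULL || 'b' FROM dual;   -- returns 'ab'

-- The original goal: fields separated by newlines, via chr(10)
SELECT col1 || chr(10) || col2 || chr(10) || col3 AS combined
FROM some_table;

-- Caveat: '' IS NULL in Oracle, so COALESCE(col1, '') changes nothing
```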
|
# ? Nov 15, 2012 09:52 |
|
bucketmouse posted:Here's a really really dumb question. Take a look at NVL2().
|
# ? Nov 15, 2012 11:14 |
|
Goat Bastard posted:Not in Oracle. || is the concatenation operator, and NULL || 'any string' returns the string.
|
# ? Nov 15, 2012 15:54 |
|
Yeah I figured I'd have to do something like that. I've already had the fun experience of reimplementing AUTO_INCREMENT with sequences. Thanks. Golbez posted:How do people come up with these things Oracle is so helpful, you see: code:
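For anyone who hasn't had that fun experience: pre-12c Oracle has no AUTO_INCREMENT, and the usual workaround is a sequence plus a BEFORE INSERT trigger. A sketch (table and column names hypothetical):

```sql
CREATE SEQUENCE users_seq;

CREATE OR REPLACE TRIGGER users_bi
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
  -- only assign an id when the caller didn't supply one
  IF :NEW.user_id IS NULL THEN
    SELECT users_seq.NEXTVAL INTO :NEW.user_id FROM dual;
  END IF;
END;
/
```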
|
# ? Nov 15, 2012 17:04 |
|
Golbez posted:How do people come up with these things I think it's actually defined as that in the ANSI SQL standard, which probably pre-dates '+' becoming the de facto standard.
|
# ? Nov 16, 2012 00:53 |
|
Not quite a code question but here goes. I have a SQL Server 2k5 app that is currently distributed over 5 nodes by application partition (think Asia on n1, Europe on n2, etc). This leads to inefficiencies and confusion. Where do you put Russian records? Etc. We are looking at consolidation with 2k8r2 or even 12, however as we would be multiplying various tables by 5 and demand by 5, I'm concerned about performance. Short of buying a 4x8 core Intel box and shoving 1TB of RAM and some crazy solid state storage in it, are there any load distribution options? Oracle has RAC but it looks as if MS doesn't have an equivalent technology. How do SQL Server shops handle large scale databases that outstrip current hardware resources? Any resources or key words to improve my googling would be appreciated.
|
# ? Nov 25, 2012 23:34 |
|
KennyG posted:Not quite a code question but here goes. I have a SQL server 2k5 app that is currently distributed over 5 nodes by application partition (think Asia on n1, Europe n2, etc). This leads to inefficiencies and confusion. Where do you put Russian records? Etc. http://www.sql-server-performance.com is probably a good place to ask about this and to do research.
|
# ? Nov 26, 2012 03:58 |
|
e; Eh, fixed it
Sab669 fucked around with this message at 21:14 on Nov 26, 2012 |
# ? Nov 26, 2012 20:59 |
|
KennyG posted:Not quite a code question but here goes. I have a SQL server 2k5 app that is currently distributed over 5 nodes by application partition (think Asia on n1, Europe n2, etc). This leads to inefficiencies and confusion. Where do you put Russian records? Etc. Some keywords that may or may not fit, but might lead you in a direction: Eventual Consistency, AlwaysOn Availability Groups, (you get perfectly-replicated readable secondaries) Peer to peer replication, merge replication, any kind of replication really, master data services, etc... I don't even know what that last one is, but I've heard it muttered a couple of times. I don't know if any of these apply at all, but we've got a pretty twisted database infrastructure here. (even though all of our instances are in the same city...) We actually use transactional replication to feed databases on a web server(s) while the web server(s) itself sends all writes (and reads performed soon after a write) back to the "master." This is a terrible pain and we're looking to consolidate into one data center, but what you're asking for can probably be done as long as your business is willing to live with a little inconsistency or complexity.
|
# ? Nov 27, 2012 23:32 |
|
I'm working on a pet project disk spanning pseudo-filesystem (think Greyhole or Windows Home Server) for which I'm currently using SQLite to keep track of what files are on what actual disks. The issue I'm running in to is that my main goal is to tolerate disk failure with as little impact as possible. The file-handling side of this is pretty easy, just don't show any files that only exist on a disk that's unavailable, my problem is on the database side. I don't want to depend on any data outside of the disks in the pool, so I currently have a "master" disk which contains the live SQLite DB and the service simply copies this to the other drives when the pseudo-FS is unmounted. Obviously this has major flaws if it's not unmounted cleanly, particularly if the master disk was to fail while mounted. It doesn't seem like there's any way to simultaneously write to multiple database files in SQLite, so is there another similar self-contained database which might be better suited to my use (I don't need much beyond basic SELECTs with a bit of JOIN going on)? Is there some third-party SQLite replication system I've been unable to find? Or is the overlap between those who need self-contained databases and those who need replication so small that there's just nothing there?
|
# ? Nov 28, 2012 00:24 |
|
I have a stupid question about normalization (still a new concept to me but I shouldn't sound like a total rear end in a top hat right now). I have a table that holds users' information (username, password, state, etc etc) and I originally designed this table to include a column for skills (general skills, weaving, calligraphy, programming etc etc). The problem is multiple users can have the same skill and one user also has many skills. It is clear I cannot use skills as a column in the users table because a row could have multiple values for that column. At the same time I'm unsure of how to make skills its own table. If "skills" was its own table I believe the only columns it would need are a skill name column and a user name column (plus a description column if I decide to add one, but it would at least need the first two I listed). Am I correct in my thinking to create a normalized database?
|
# ? Nov 28, 2012 22:24 |
Mindisgone posted:I have a stupid question about normalization (still a new concept to me but I shouldn't sound like a total rear end in a top hat right now). I have a table that holds users' information (username, password, state, etc etc) and I originally designed this table to include a column for skills (general skills, weaving, calligraphy, programming etc etc). The problem is multiple users can have the same skill and one user also has many skills. It is clear I cannot use skills as a column in the users table because a row could have multiple values for that column. At the same time I'm unsure of how to make skills its own table. Am I correct in my thinking to create a normalized database? Is a `skill` a unique entity to the user, or will users share the same skill entities? If they're shared, you'll need two tables: a table containing information about skills, and a join table to match users with skills. If not, then just one table will suffice.
|
|
# ? Nov 28, 2012 22:36 |
|
^ Users will share the same skill entities. So a separate skills table should contain the columns: skill name, description, and user name to join the tables? I am a bit lost as to the design of the skills table because I have never had to join tables before, but now that I think about it there is no reason to duplicate user names in a column in the skills table. I suppose my back end PHP code handles the logistics of what users get what skills, and the only columns the skills table needs are skill name and description?
|
# ? Nov 28, 2012 23:04 |
|
Users table:
    user_id
    user_name
    [other user info]
Skills table:
    skill_id
    skill_name
    [other skill info]
Join table:
    user_id
    skill_id

This lets you define each user and each skill once, and in the linking/join table you can have as many entries as you want. So if user 1 has skills 3, 7, and 10, there would be three rows in that table for him. So, to find out what skills someone has, you would query:

SELECT skill_name FROM skills LEFT JOIN user_skills USING (skill_id) WHERE user_id = $i
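As DDL, the three tables might look like this (types, lengths, and constraints are assumptions, not part of the original post):

```sql
CREATE TABLE users (
  user_id     INT PRIMARY KEY,
  user_name   VARCHAR(50) NOT NULL
  -- other user info: password hash, state, etc.
);

CREATE TABLE skills (
  skill_id    INT PRIMARY KEY,
  skill_name  VARCHAR(50) NOT NULL,
  description VARCHAR(255)
);

CREATE TABLE user_skills (
  user_id  INT NOT NULL REFERENCES users (user_id),
  skill_id INT NOT NULL REFERENCES skills (skill_id),
  PRIMARY KEY (user_id, skill_id)   -- one row per user/skill pair
);
```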
|
# ? Nov 28, 2012 23:36 |
|
Golbez posted:Users table: Excellent, thank you very much for the quick lesson. I believe I still have some problems though. Specifically users should be able to add as many unique skills to their profile as they please. If we do not have record of the skill the database should add it which I can already do. I just still don't see with the above example how it keeps a permanent record of which users assigned which skills to their profile, some users having the same skills. EDIT: I'm a dumbass and believe some if not all join functions save the joins to another permanent table...right? Mindisgone fucked around with this message at 06:37 on Nov 29, 2012 |
# ? Nov 29, 2012 06:34 |
|
Mindisgone posted:Excellent, thank you very much for the quick lesson. I believe I still have some problems though. Specifically users should be able to add as many unique skills to their profile as they please. If we do not have record of the skill the database should add it, which I can already do. I just still don't see with the above example how it keeps a permanent record of which users assigned which skills to their profile, some users having the same skills. The permanent record is the users_skills table. Maybe this will clear things up a bit: code:
code:
|
# ? Nov 29, 2012 10:13 |
|
KARMA! posted:A great example Ah yes that is very clear now that I can see it, thanks.
|
# ? Nov 30, 2012 00:12 |
|
So I'm taking a databases course and the professor just decided to change the requirements on an assignment we turned in three weeks ago. This here's a trigger I have: (Oracle) SQL code:
|
# ? Nov 30, 2012 00:18 |
|
Sockser posted:So I'm taking a databases course and the professor just decided to change the requirements on an assignment we turned in three weeks ago. You can insert rows in bulk by feeding the INSERT a SELECT statement's results - code:
|
# ? Nov 30, 2012 00:47 |
|
Ah, I guess I should have explained a bit better. So we're making some janky-rear end console-based facebook for our final project. This trigger is for when you send a message to a 'group': it finds all members of that group and sends an individual message to each member. So User1 sends a message to GroupA, and the table I insert to needs to look like

FROM -- TO
User1 -- GroupA:User1
User1 -- GroupA:User2
User1 -- GroupA:User3
etc

So I can do a select and get all members of a group, but then when I insert I need to join it with a single value, that is, User1. Is there a way to do that?
|
# ? Nov 30, 2012 00:55 |
|
Sockser posted:Ah, I guess I should have explained a bit better You have a SELECT statement that will give you all the users in the group, so just add another column with the message id to the statement (you don't need to use a cursor), and feed that to the INSERT directly.
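Inside the trigger, that suggestion might look like the following (the MessageRecipient/GroupMembership tables and :NEW column names are guesses at the assignment's schema):

```sql
-- One INSERT: the trigger's :NEW values appear as constant columns
-- alongside the per-member rows from the SELECT
INSERT INTO MessageRecipient (message_id, sender, recipient)
SELECT :NEW.message_id,
       :NEW.sender,
       gm.member_name
FROM GroupMembership gm
WHERE gm.group_name = :NEW.recipient_group;
```

No cursor, no join against a scalar: the scalar just rides along as an extra column.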
|
# ? Nov 30, 2012 01:00 |
|
right, understood, and I'm sure I sound just dumb as hell right now, but how would I go about joining one cell as a column with a table? What's the syntax for doing such a thing? A join would just give me a column full of nulls, wouldn't it? vvv badaboom. Cross join, a thing I don't recall ever being mentioned in class. Sockser fucked around with this message at 03:49 on Nov 30, 2012 |
# ? Nov 30, 2012 01:37 |
|
Outer joins will give you NULL when a value is not found. Inner joins will only return rows with results. This is one of the better join explanations: http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html
|
# ? Nov 30, 2012 03:00 |
|
Sockser posted:vvv badaboom. Cross join, a thing I don't recall ever being mentioned in class. Yeah, lots of people don't bother to cover them explicitly. There are two reasons for this: they're equivalent to any other kind of join with a trivially true join condition, and your situation is one of the very few where cross joining is a smart thing to do.
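The equivalence mentioned here can be shown directly (table names hypothetical):

```sql
-- A cross join is just a join whose condition is always true;
-- both queries produce every (user, department) combination
SELECT u.name, d.dept_name
FROM users u CROSS JOIN departments d;

SELECT u.name, d.dept_name
FROM users u INNER JOIN departments d ON 1 = 1;
```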
|
# ? Nov 30, 2012 05:22 |
|
Alright, so I loving hate Oracle but what the gently caress is going on here? SQL code:
code:
profile, friends, pendingfriends, messages, messagerecipient, groups, groupmembership. So uh... what the gently caress is going on here, then? e: for reference, this works: SQL code:
Sockser fucked around with this message at 19:33 on Nov 30, 2012 |
# ? Nov 30, 2012 19:29 |
|
Sockser posted:Alright, so I loving hate Oracle but what the gently caress is going on here You cannot join to a scalar value, nor is there any need to. Just select from your table and add the scalar as a column on the SELECT query.
|
# ? Nov 30, 2012 19:59 |
|
Well son of a bitch, I made that a lot harder than it needed to be.
|
# ? Nov 30, 2012 20:01 |
There is an SQL thread! Wonderful. Using PostgreSQL 8.4, I have the following table: code:
What the table actually contains: check-in/check-out data for a door scanner. There is one small caveat: a user's presence is considered expired after 12 hours if they don't actually check out. Status will always be either IN or OUT. Here's some sample data: code:
(Sorry if this is mis-formatted: the automatic formatter was choking on it so I just tried to reasonably fake it) code:
At this point, I'm honestly considering breaking out the spatial extensions and treating it that way, but I really don't want to go down that road for this. If it helps, I can provide some extra example sql for performing such queries as "how many members are in right now?" Bad Munki fucked around with this message at 02:09 on Dec 3, 2012 |
|
# ? Dec 3, 2012 02:05 |
|
This isn't really the best type of problem to solve with SQL queries, but it's certainly possible and I've had to do something similar in the past. The way I solve this is to add dummy rows to "fix" all of the corner cases by making them simple cases. The key to this is going to be the lead and lag window functions. This will allow you to compare the values on this row to the values on the next row and previous row as needed. I may be missing some, but I think you really have 3 basic cases to consider. I would do the following to fix each, pending your business rules.
I think fixing them in this order will solve your corner cases, but it's difficult to say without really digging into all of the data. I would normally do this by copying the data for this date range, with an extra day on both sides, to a temp table and doing all of the manipulations there since this is strictly for reporting purposes. It will be several SQL statements, 5 if I didn't miss any cases above, to clean up the data, but once you've done that it will make any sort of reporting off the data much easier. The key is just to be sure to do lots of analysis around the actual corner cases in your data before and after these changes to be sure we don't make anything look bad. I like to check users that I'm doing the most manipulations to for making sure my code is working as intended as well as users that I'm doing no manipulations to make sure I'm not missing something in their data. If you need help with a sample statement for one of the corner cases let me know and I'll whip something up later on today when I have some more time.
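To make the lead/lag suggestion concrete, here is one way to surface the corner cases (an IN with no valid OUT within 12 hours) on the door-scanner table; the table/column names follow the earlier post but the exact schema is assumed:

```sql
-- For each scan, look at the same user's next scan; an IN whose next event
-- isn't an OUT within 12 hours is a corner case to patch with a dummy row
SELECT *
FROM (
  SELECT uid, status, ts,
         lead(status) OVER (PARTITION BY uid ORDER BY ts) AS next_status,
         lead(ts)     OVER (PARTITION BY uid ORDER BY ts) AS next_ts
  FROM door_log
) x
WHERE status = 'IN'
  AND (next_status IS DISTINCT FROM 'OUT'
       OR next_ts > ts + interval '12 hours');
```

Window functions (lead/lag, PARTITION BY) arrived in PostgreSQL 8.4, so this works on the version in question.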
|
# ? Dec 3, 2012 03:19 |
|
I can surely clean the data up, I think that's pretty straightforward. I would also consider, when a user checks in, pre-emptively inserting an OUT item 12 hours from then, and just updating it if they check out before the pre-recorded time on that entry. Then, at least, all INs would be guaranteed to have a corresponding OUT, even if it hasn't happened yet. When I was talking about edge cases, it was more in reference to checking for intersections between ranges. To wit: code:
Case B: The intervals do not intersect: the test range is entirely after the target range (isomorphic with case A)
Case C: The intervals intersect: the test range ends during the target range
Case D: The intervals intersect: the test range begins during the target range (isomorphic with case C)
Case E: The intervals intersect: the test range is entirely within the target range
Case F: The intervals intersect: the test range entirely contains the target range (isomorphic with case E)

Because of the mentioned isomorphism between some of the test cases, there are really just three cases: one interval is entirely outside the other, one interval partially intersects the other, and one interval is wholly contained within the other. So, that's fine, it'd be a little cumbersome, but a few ORs would take care of it... I'm looking at the link you provided; I think that you're right that if I take the time to clean the data up a little, it may make my job a lot easier. My only concern is that this data will be queried pretty frequently (and updated arbitrarily often), so I'd be concerned about potential performance issues. Needs to stay fairly snappy.

Another option I'm considering is to sidestep the whole problem of deriving the hourly data and just recording it straight up. That is, have a table with nothing but (rounded) timestamps and counts. Then have an hourly cron run that will examine the currently-present users (I have SQL for that already) and add a row with that information. One problem there is that it'll only catch people that are checked in for at least an hour. Another problem is that it's less flexible, although a group and sum would at least let me lower the resolution of the data (to, say, per day.)

What I'd really like is a select statement that will return two columns, start time and stop time.
With those, I could pretty easily apply that information to a generated series of time ranges and use the cases I described above to figure out actual accurate counts on the fly with an arbitrary resolution. I'm hoping that once I understand the lead and lag functionality you linked, that it will lead to that possibility. Oof, I wrote more than intended. Thanks for the help thus far! e: Oh, and I didn't mention, this is a new system and isn't fully live yet, so while I'd like to keep the data collected up to this point, it'd be okay to nuke it as needed. Some of the early data was strictly IN logs, so that may well go away just on good measure. I mean, realistically, I could nuke ALL the data if I wanted, but where's the fun in that? Bad Munki fucked around with this message at 04:10 on Dec 3, 2012 |
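Worth noting: the three cases above collapse into a single predicate. Two intervals intersect iff each one starts before the other ends, which covers partial overlap and full containment in one shot (the presence table and bind-variable names below are hypothetical):

```sql
-- Intervals [in_time, out_time) and [:range_start, :range_end) intersect
-- iff  in_time < :range_end  AND  :range_start < out_time
SELECT uid, in_time, out_time
FROM presence p
WHERE p.in_time  < :range_end
  AND p.out_time > :range_start;
```

PostgreSQL also has a built-in `(a, b) OVERLAPS (c, d)` operator that expresses the same test, so no ORs are needed at all.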
|
# ? Dec 3, 2012 03:51 |