|
1) When should business logic be written at the database level versus the programming language level? Or is it something like "always in the DB unless you can't"? 2) My coworker just told me about a technique I've never heard of before, so I wanted to know how commonplace it is. When importing a set of data, like many rows, you create a temporary table that's the same shape as the table you want to insert the stuff into, then you use some crazy SQL command to import the stuff en masse, and then delete the temporary table. True? What crazy command is this? And you wouldn't just loop over the stuff and INSERT each row/item?
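If I understand the technique, the en-masse part would be something like this (table and column names are made up; the staging table gets bulk-loaded first, e.g. with bcp or BULK INSERT): code:
    -- staging table, same shape as the real one
    create table orders_staging (order_id int, customer_id int, amount decimal(10,2))

    -- ... bulk load the raw file into orders_staging here ...

    -- the "crazy command": copy everything across in one set-based statement
    insert into orders (order_id, customer_id, amount)
    select order_id, customer_id, amount
    from   orders_staging

    drop table orders_staging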
|
# ¿ Dec 19, 2007 16:28 |
|
Maybe I'm misunderstanding something. Can INSERT insert multiple rows? So are you INSERTing the SELECTion of the temporary table into the main table?
|
# ¿ Dec 19, 2007 16:51 |
|
Anyone familiar with bulk imports in Sybase? Can I create a table and then use bcp to just generate the format file for me, then use that format file to import more data? I'm totally new to how all this stuff works...
|
# ¿ Dec 21, 2007 17:07 |
|
Is there a way to programmatically detect the existence of a table, so I can, say, conditionalize running a CREATE (create if it doesn't exist)? This is Sybase I'm working on. Apparently there are stored procedures that list out all the tables, and I guess I could parse those... Are those stored procedures made by my coworkers or are they Sybase built-ins?
|
# ¿ Dec 21, 2007 23:37 |
|
Is there an SQL way to do this programmatically, or will I just use Perl to parse the output of some lookup command to see if the table is there? And then conditionalize the running of that CREATE SQL?
|
# ¿ Dec 22, 2007 01:36 |
|
imBen posted: I have no idea what you're TRYing to ask. I really don't think you're asking if INSERT can do this? ... Because of course it can.
I've never done a lot of INSERT work, so I guess I never learned...
m5 posted: Isn't there a "sysobjects" table, or something like that? That's what we do in SQL Server.
Yes! This worked. I can just select the name from that table and parse that in Perl space...
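For anyone searching later, the check boils down to something like this (table name made up; 'U' is the type sysobjects uses for user tables): code:
    select name
    from   sysobjects
    where  name = 'my_table'
      and  type = 'U'
If it returns a row, the table exists and I skip the CREATE.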
|
# ¿ Dec 27, 2007 18:20 |
|
How do I limit the number of rows I get for a query in Sybase? In Oracle I surrounded my query with a select and used rownum, and in MSSQL I used TOP... Edit: My coworker mentioned something about setting a variable and then unsetting it, but that sounds suspicious and sloppy. Edit^2: Well, "set rowcount N" and then "set rowcount 0" (to reset) works, but again, that feels wrong. Triple Tech fucked around with this message at 17:51 on Jan 4, 2008 |
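For the archives, the rowcount approach looks like this (the query itself is just an example): code:
    set rowcount 10
    select * from big_table order by created_at desc
    set rowcount 0   -- back to unlimited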
# ¿ Jan 4, 2008 17:47 |
|
I thought of a radical idea on the train today. Brace yourselves. Do you template your SQL? Every time I see SQL intertwined with code (Perl in this case), I can't help but think... God. That is so ugly. Something is wrong... But I'm not sure what. Then it dawned on me on the train... The concern can be separated out with a templating engine. SQL, like HTML, is hardly ever useful in a 100% static form. Requests and desires are generated on the fly. HTML is easily handled by something like Mason (templating for Perl). So, shouldn't SQL be easily handled as well in a Mason context? The idea seems far too radical, at least for me. In all four of my SQL-aware jobs, not one of them separated the SQL out into different files. I feel like I'm really breaking ground with this one.* What do you guys do, do you separate out your SQL like you separate out your HTML? Or does it just get jammed right into the code? * Not really, I'm just being hyperbolic.
|
# ¿ Jan 25, 2008 17:17 |
|
Can I do this? I know I can just make it two UPDATE statements with a WHERE clause, but I wonder if a single UPDATE is any better. This is SQL Server 2005. And Victor already showed me how to exclude a record from a SUM using CASE ... ELSE 0, but how do you exclude something from a COUNT? The documentation says COUNT(ALL ...) will exclude nulls, but Victor says he's never seen ALL used that way. How else would you use it? code:
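The conditional-aggregate trick, roughly, with made-up table and column names (the COUNT works because a CASE with no ELSE yields NULL, and COUNT skips NULLs): code:
    select
        sum(case when status <> 'void' then amount else 0 end) as live_amount,
        count(case when status <> 'void' then 1 end)           as live_rows
    from orders

    -- and combining two conditional updates into one (again, invented names)
    update accounts
    set    balance = case when kind = 'A' then balance + 10
                          when kind = 'B' then balance - 10
                          else balance end
    where  kind in ('A', 'B')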
|
# ¿ Apr 14, 2008 20:51 |
|
Aredna posted:
What does the comma 1 part do? I understand what max does, but not with multiple arguments in the expression.
|
# ¿ Apr 14, 2008 21:20 |
|
Every day is like a rapid fire of dumb SQL questions to Victor and #cobol... Here's my current one for SQL Server. Is there a way to store the value of a column into a variable? In Perl-speak, I would like this. code:
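In T-SQL terms, I think what I'm after is something like this (table and column names invented): code:
    declare @last_login datetime

    select @last_login = last_login
    from   users
    where  user_id = 42

    select @last_login as last_login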
|
# ¿ Apr 18, 2008 17:01 |
|
My company's web app lets customers run calculations that are Expensive. How can we mitigate this cost with caching? Scenario 1 - The result of the calculation is of different sizes and dimensions each time, currently represented as a Perl data structure. Current solution: serialize it and store that string in the DB. Can it be done any other, better way? Scenario 2 - The results of the calculations are the exact same size each time, maybe a different number of rows, but still the same columns no matter how you slice it. Can we just dump this data into a table and then make the PK somehow related to the inputs that the customer put in for the calculation? If already calculated, read cache, if not, calculate + store?
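For scenario 2, the shape I'm imagining is something like this (everything here is made up; the key is a hash of the canonicalized inputs): code:
    create table calc_cache (
        input_hash  char(32)      not null,  -- e.g. MD5 of the inputs, computed in Perl
        result_row  int           not null,  -- position within one cached result set
        col_a       decimal(18,4) null,
        col_b       decimal(18,4) null,
        created_at  datetime      not null default getdate(),
        primary key (input_hash, result_row)
    )
On a request: hash the inputs, select from calc_cache; if nothing comes back, run the calculation, insert the rows, and return them.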
|
# ¿ May 6, 2008 17:41 |
|
What's the difference between a relationships table (relationship id + object ids) versus a table with composite primary keys? I'm thinking, like... If it's just a join, where each object has its own properties, and only the relationships need to be defined, then a relationships table is fine. But then it's like, what if the relationships themselves have data that neither of the objects can provide independently? Then you start attaching data onto the relationships table. Now, this relationships table, do you implement yet another relationships xref table and then a table about the relationships, or do you just have a composite primary key in the relationships table (instead of a relationship id)? code:
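The two shapes I'm comparing, roughly (all names invented, and you'd pick one or the other): code:
    -- 1) surrogate relationship id
    create table friendships (
        friendship_id int identity primary key,
        goon_a_id     int not null references goons (goon_id),
        goon_b_id     int not null references goons (goon_id),
        since         datetime null   -- data that belongs to the relationship itself
    )

    -- 2) composite primary key, no surrogate
    create table friendships2 (
        goon_a_id int not null references goons (goon_id),
        goon_b_id int not null references goons (goon_id),
        since     datetime null,
        primary key (goon_a_id, goon_b_id)
    )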
Triple Tech posted: My company's web app lets customers run calculations that are Expensive. How can we mitigate this cost with caching? No takers on how to cache calculated data of varying sizes? Triple Tech fucked around with this message at 20:07 on May 8, 2008 |
# ¿ May 8, 2008 20:01 |
|
When I get crazy queries that are hard to figure out, I ask myself, what is the primary concept I want vs what is joined.
Primary concept: rows where type != ACK
Join concept: time
Join condition: ACK num = ACK num (selecting from ACKs only)
So, in pseudo code... code:
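Which, with invented table and column names, would come out something like: code:
    select m.*, a.msg_time as ack_time
    from   messages m
    join   messages a
      on   a.ack_num  = m.ack_num
     and   a.msg_type = 'ACK'
    where  m.msg_type <> 'ACK'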
|
# ¿ May 21, 2008 13:47 |
|
select name, max(score) from table group by name order by name (untested)
|
# ¿ Jun 10, 2008 04:39 |
|
Is there a way to change a column from decimal(5) to decimal(5,3) in one command, or do I have to grow it to decimal(8,3), divide by 1000, and then shrink it?
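Assuming this is SQL Server (not sure that's the case), the one-command attempt would look something like this, with made-up names; note it will simply error out if existing values don't fit the new precision/scale, which is probably why the grow/divide/shrink dance comes up at all: code:
    alter table my_table alter column my_col decimal(5,3)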
|
# ¿ Jul 9, 2008 03:16 |
|
Is a TRUNCATE, DROP faster than a DROP in SQL Server? The reason I'm guessing so is because with a drop, it would try to record everything in the transaction log. But if I truncate it first, there would be nothing to record, ergo, faster.
|
# ¿ Jul 15, 2008 18:57 |
|
DLCinferno posted:I've done them on large tables (500 million rows or more) so I know from experience. Just curious, how many columns were in this table?
|
# ¿ Jul 15, 2008 23:12 |
|
The table in our instance of SQL Server 2005 (we are not technically adept/DB mastahs) had 130 columns over 4M rows, and it took over a minute to drop. So, I figured it was trying to log everything. The recovery model is already set to simple.
|
# ¿ Jul 16, 2008 19:33 |
|
Yeah, obviously if I were at the helm, there would be a lot more judicious use of what goes where, etc. But they're sort of taking a shotgun approach to the way we're modeling the data, so it just so happens that one slice of our in-house model has 130 possible dimensions. The vast majority of the fields are null, and I can scream audit until the cows come home, but that's just the way it is. The data that we're getting from other people, though, is 80 columns, easy. And we're still talking a few million rows, per month.
|
# ¿ Jul 16, 2008 22:12 |
|
permanoob posted: Would this be the place to ask questions about the SQL Server package itself? I'm guessing not, but I figured it's safer to post here first and ask than to make a new thread in SH/HC. You might as well try, it couldn't hurt. Worst case scenario, you make a thread.
|
# ¿ Jul 25, 2008 21:43 |
|
Victor posted:Your CTE also needs to be made into dynamic SQL and executed as such. I don't understand, what is the benefit to using dynamic SQL versus me just stitching a query up client side and then running it? It seems pretty much the same, to me.
|
# ¿ Aug 20, 2008 19:39 |
|
Edit: Wow, not an SQL question...
Triple Tech fucked around with this message at 22:21 on Aug 20, 2008 |
# ¿ Aug 20, 2008 21:05 |
|
I don't know much about database design, but I'm not convinced the standard stored procedure benefit still comes through with dynamically generated SQL. It's like eval'ed code in a programming language: it's not semantically relevant up until the point that it's run, because it can always change before then. This isn't something predictable like bind parameters. That's all I was trying to get at.
|
# ¿ Aug 22, 2008 04:30 |
|
Let's say I have a table called goons and each row in the table represents one goon with typical goon attributes like user_id, last_login, etc. Let's say each goon can be categorized into only one goon_type, like where they hang out most often (TCC, GWS, TFR, etc). In the OOP sense, think subclasses. Now, what if each subclass has attributes exclusive to that subclass? favorite_weapon, favorite_drug, favorite_food, etc. How do you model this? Do you put all of those fields in the goons table and NULL them out where they don't apply? Or do you keep the goons table generic and make tables to join off to like goons_tfr, goons_gws, etc that consist only of the goon pk plus the exclusive attributes?
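To make the second option concrete, I'm picturing something like this (all names are made up, and obviously you'd do one layout or the other, not both): code:
    -- generic base table
    create table goons (
        goon_id    int identity primary key,
        user_id    int not null,
        last_login datetime null,
        goon_type  char(3) not null   -- 'TCC', 'GWS', 'TFR', ...
    )

    -- one table per subtype, holding only the exclusive attributes
    create table goons_tfr (
        goon_id         int not null primary key references goons (goon_id),
        favorite_weapon varchar(50) not null
    )

    create table goons_gws (
        goon_id       int not null primary key references goons (goon_id),
        favorite_drug varchar(50) not null
    )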
|
# ¿ Sep 18, 2008 17:19 |
|
This is another prose-y question... What's the best way I can update multiple columns/rows of data? I'm only familiar with using the UPDATE statement when it's a single column, limited by a WHERE clause using an id or pseudo id. My scenario is there's a table we have with columns ABCDE. At the first phase of loading data, I can only fill in columns ABC, and I'll null out DE. In a second phase of loading, I'll have access to the same ids and whatnot, but I can fill in the DE sections. So, what SQL statement or design can I use to fill out those columns in an existing table? Edit: Talking with my coworker, this seems like something a normal UPDATE statement can handle, with joins and whatnot... I guess I'll look into it. Triple Tech fucked around with this message at 14:51 on Oct 14, 2008 |
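In SQL Server the join-flavored UPDATE would look roughly like this (table and column names invented): code:
    update t
    set    t.col_d = s.col_d,
           t.col_e = s.col_e
    from   target_table t
    join   phase2_staging s
      on   s.id = t.id
    where  t.col_d is null   -- only the rows still waiting for phase two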
# ¿ Oct 14, 2008 14:01 |
|
Question to the peanut gallery at large with regards to the question just posted: isn't this infinite-attributes model sort of discouraged? It doesn't take advantage of relational structure at all. Or does it? It just smells really wrong to me. Edit: It's called the entity-attribute-value model.
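For anyone who hasn't seen it, the layout in question boils down to something like this (names made up): code:
    create table entity_attributes (
        entity_id  int          not null,
        attr_name  varchar(100) not null,
        attr_value varchar(255) null,      -- everything gets stuffed into a string
        primary key (entity_id, attr_name)
    )
New "fields" are just new attr_name values, which is the whole appeal and, I think, the whole problem.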
|
# ¿ Nov 4, 2008 23:20 |
|
What's the alternative for a workflow that needs to support ad-hoc fields, though? It's still a pertinent, albeit ugly, question.
|
# ¿ Nov 5, 2008 01:46 |
|
But why not model them "the normal way"? What's so insanely beneficial about this setup? Is it that the cost of adding a new field completely outweighs the cost incurred by modeling it properly? And that includes maintenance costs and complexity for both?
|
# ¿ Nov 5, 2008 18:53 |
|
I'm not convinced that this is the "right" or most effective thing to do (just saying, no convincing needed). Having 600 attributes to one object just smells like an attempt to model what are really heterogeneous objects whose dimensions are skew and have nothing to do with one another.
|
# ¿ Nov 5, 2008 19:47 |
|
So you have a base table called "evidence" and then you have a knives table, a guns table, a clothes table, that all join off this evidence table. The only weakness of this system is the inability to handle completely brand new types of evidence, which I feel is the only scenario where EAV shines, and it doesn't happen often enough, I feel, to merit using it anyway.
|
# ¿ Nov 5, 2008 20:11 |
|
You don't union heterogeneous objects, that's what the evidence table is for. It has all of the pieces of evidence in the entire universe. The only thing in the other types is information specific to that piece of evidence. It's the whole "let's treat things that are wildly different as the same without factoring out the similarities" angle that I find ridiculous.
|
# ¿ Nov 5, 2008 23:03 |
|
drcru posted: What would cause a table overhead when you're updating it? Victor has a really great point. Define what you mean by overhead. The update is taking longer than usual? What else was going on at the time? Etc. Essentially, anything related to better read performance would correlate with worse write performance (and vice versa).
|
# ¿ Nov 24, 2008 18:03 |
|
Let's say I have a table of entities, and a table of relationships based off of those entities. Can I have one constraint on each table that dictates "an entity cannot exist unless it has at least one relationship"? If so, when I add a new entity, do I have to release the relationship constraint first? Or can something be added to both tables "simultaneously"? Like, can they be wrapped in a transaction? Will the constraint only be enforced at the conclusion stage of the transaction?
|
# ¿ Nov 24, 2008 23:20 |
|
Let's say I have a table that represents a superclass, students. And then I have N tables that represent subclasses of that object (athletes, musicians, etc). How can I express a constraint such that a student must be modeled in exactly one (no more, no less) subclass? Edit: Cross-posted to Stack Overflow for maximum hypernature Triple Tech fucked around with this message at 00:54 on Dec 2, 2008 |
# ¿ Dec 2, 2008 00:22 |
|
Uhh, not sure how I can do that considering I wasn't in the room at the time. I did think of a solution though, I'll post it when I implement it. Edit: The superclass table has a subclass code column. Each of the subclass tables has a foreign key constraint, as well as one that dictates that the id exists within the subset of the superclass table where code = athlete. The only missing part here is that it's possible to model a superclass row without a subclass. Even if you make the code column mandatory, it could just be an empty join. That can be fixed by adding a constraint that the superclass's ids exist in a union of the ids in the subclass tables. Insertion gets a little hairy with these two constraints if constraints are enforced in the middle of transactions. That, or just don't worry about unsubclassed objects. Triple Tech fucked around with this message at 03:13 on Dec 2, 2008 |
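A sketch of what I mean, with invented names (the "id must exist where code = athlete" part is handled by pointing a composite foreign key at a unique (id, code) pair on the superclass table): code:
    create table students (
        student_id   int     not null primary key,
        student_type char(3) not null,    -- 'ATH', 'MUS', ...
        constraint uq_students_id_type unique (student_id, student_type)
    )

    create table athletes (
        student_id   int     not null primary key,
        student_type char(3) not null default 'ATH' check (student_type = 'ATH'),
        sport        varchar(50) null,
        constraint fk_athletes_students
            foreign key (student_id, student_type)
            references students (student_id, student_type)
    )
The "every student must appear in some subclass" half is the part this still doesn't cover.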
# ¿ Dec 2, 2008 02:59 |
|
In SQL Server 2005, how do you check if a temporary object (table or procedure) already exists? They seem to lack object IDs (rather, object_id returns null). Edit: It seems tempdb..sysobjects has them listed... Sort of. Edit^2: And it seems the fully qualified name object_id needs for this to work is tempdb..#table Triple Tech fucked around with this message at 00:32 on Dec 13, 2008 |
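So the check ends up being something like this (the temp table name is just an example): code:
    if object_id('tempdb..#scratch') is not null
        drop table #scratch

    create table #scratch (id int)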
# ¿ Dec 13, 2008 00:28 |
|
What database system is this that something like that takes over 40 seconds?
|
# ¿ Dec 15, 2008 20:56 |
|
I guess I'm still very new to databases, and I'm not exactly sure how powerful our hardware is (and it might be, since THE MANAGERS just like to throw money at the problem instead of making things The Right Way (tm)). But in my limited experience, anything that takes over a couple of seconds is suspect of being approached incorrectly, and I work with tables that are at least in the hundreds-of-thousands-of-rows range. Tens of seconds is just... Crazy to me. Chalk one up for Microsoft and SQL Server, I guess. The longest part of the process I'm working on is selecting (almost) an entire table of data (like four million rows) and "translating" it to another format for insertion into a different table. That takes like... 13 minutes?
|
# ¿ Dec 15, 2008 21:24 |
|
What is type 2? Kind of difficult to Google for, I'm not sure what I'm looking at.
|
# ¿ Dec 15, 2008 22:11 |