|
I wrote a bunch of stored procedures for SQL Server 2008 that relied on user-defined table types to pass tables to stored procedures. The code now needs to be ported to SQL Server 2005, which does not have user-defined table types. After a lot of googling, it looks like the best option is to change the user-defined table type parameters to XML parameters. I am trying to find a guide to extracting data from an XML parameter that does not suck. Every example I've found is either going the wrong way, or was written to a trivial XML schema that really can't represent a table (i.e. assumes a static number of rows, etc.) Any advice about the best way to do this would be appreciated; ideally there is a trivial way to take the XML representation of a C# ADO DataTable, and some procedure in SQL Server 2005 that can interpret that XML and reconstruct the table. Chuu fucked around with this message at 23:56 on Oct 1, 2010
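For what it's worth, the usual SQL Server 2005 pattern for this is to shred the XML parameter with the nodes()/value() methods. A minimal sketch, assuming the client serializes the DataTable as repeating row elements; the procedure, table, and column names here are made up for illustration:

```sql
-- Sketch only: assumes the client sends XML shaped like
-- <rows><row><id>1</id><name>foo</name></row>...</rows>.
-- dbo.SomeTable and its columns are hypothetical.
CREATE PROCEDURE dbo.ImportRows
    @data XML
AS
BEGIN
    INSERT INTO dbo.SomeTable (Id, Name)
    SELECT
        r.value('(id/text())[1]',   'INT'),
        r.value('(name/text())[1]', 'NVARCHAR(100)')
    FROM @data.nodes('/rows/row') AS t(r);
END
```

Each row element becomes one row of the SELECT, so the row count is whatever the client sent; there is no fixed-size assumption baked into the schema.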
# ? Oct 1, 2010 23:51 |
|
|
Has anyone written a basic gambling game with user accounts and database results/records that they'd want to share? I'm playing with a gambling idea on some forums I chat on. It's not real-life money so it doesn't have to be perfect.
|
# ? Oct 2, 2010 00:14 |
|
Veskit posted:OK ONE LAST THING. I'm getting redundant information in the available column, so how do I get it to exclude all results as available if it already has a serial number in the unavailable column? I don't think I understand the question. Can you post some sample data and what you want it to look like ideally?
|
# ? Oct 2, 2010 03:45 |
|
Chuu posted:I wrote a bunch of stored procedures for SQL Server 2008 that relied on user-defined table types to pass tables to stored procedures. Erland Sommarskog has an article on the pros and cons of the various ways of passing TVP-style data in SQL Server 2005, including XML and other ways of getting table-like data into stored procedures or functions: http://www.sommarskog.se/arrays-in-sql-2005.html
|
# ? Oct 2, 2010 09:43 |
|
I had to add a new UID column to all my existing tables, so basically a Varchar(36) in a MySQL database. No problems there, but I need to generate values for all the existing rows (the app will handle it for new rows). I tried to run this query: update some_table set uid = uuid(); Which seemed to work, but in fact it generated just one uid and set all the rows equal to that. How can I force this method to generate a new value for every row? Or can I do it with a different query? There are too many rows to do it manually.
|
# ? Oct 3, 2010 21:31 |
|
FateFree posted:I had to add a new UID column to all my existing tables, so basically a Varchar(36) in a MySQL database. Ah well, you could INSERT ... SELECT into a new table with auto-generated UIDs turned on and then update back to the original table.
|
# ? Oct 3, 2010 23:03 |
|
Sprawl posted:Ah well you could select insert into a new table with auto generate uids turned on and then update back to the original table. That might work, but it would take forever with all these tables. I'm hoping someone knows some clever way. Unless there is a query that could duplicate a table somehow?
|
# ? Oct 3, 2010 23:24 |
|
Well, I'm an idiot, the values were unique after all. They just looked startlingly similar at first glance.
|
# ? Oct 4, 2010 00:22 |
|
Out of morbid curiosity, are you using that UUID as a primary key, and are you also using InnoDB? If so, you should be aware that InnoDB stores rows on disk in the order of the primary key. Other databases call this a "clustered index." UUIDs are not guaranteed to be anything near sequential, so you may encounter a performance hit when entering new rows if things need to be shuffled around.
|
# ? Oct 4, 2010 03:12 |
|
Also, you described it as "basically a Varchar(36)..." Looking up UUIDs it looks like they're unvarying in length, so a CHAR(36) would presumably be better.
|
# ? Oct 4, 2010 08:06 |
|
McGlockenshire posted:Out of morbid curiosity, are you using that UUID as a primary key, and are you also using InnoDB? Actually I'm doing a hybrid approach where I'm using an autogenerated id as the primary key, and the uid as just a unique value for every row. This lets me have a unified hashcode/equals method that's safe to work with Hibernate collections.
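That hybrid layout, sketched as MySQL DDL (table and column names hypothetical):

```sql
-- Auto-increment PK keeps InnoDB's clustered index sequential;
-- the UUID is just a unique column used for hashcode/equals.
CREATE TABLE things (
    id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    uid CHAR(36)     NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY uq_things_uid (uid)
) ENGINE = InnoDB;
```

Inserts stay cheap because new rows always land at the end of the clustered index, while the uid column still gives every row a stable identity that exists before the row is persisted.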
|
# ? Oct 4, 2010 11:27 |
|
Knightmare posted:In Oracle, you can prompt for values with the following code: Anyone know about this? I'm guessing it's not possible due to not being able to find anything about it, but I'd settle for even a semi-complex workaround.
|
# ? Oct 4, 2010 17:01 |
|
Knightmare posted:Anyone know about this? I'm guessing it's not possible due to not being able to find anything about it, but I'd settle for even a semi-complex workaround. This is the closest I could find. http://www.enterprisedb.com/docs/en/8.3R2/oracompat/EnterpriseDB_OraCompat_8.3-226.htm quote:11.1.2 Command Summary
|
# ? Oct 4, 2010 17:37 |
|
Hammerite posted:Also, you described it as "basically a Varchar(36)..." Looking up UUIDs it looks like they're unvarying in length, so a CHAR(36) would presumably be better. I was about to say "except when MySQL decides that your CHAR should be a VARCHAR because it's loving retarded and knows better than you", but I couldn't make it do that in 5.1. When did they fix this? It makes me happy. e: They fixed that in 5.0. Amazing.
|
# ? Oct 4, 2010 18:05 |
|
FateFree posted:I had to add a new UID column to all my existing tables, so basically a Varchar(36) in a MySQL database. This might seem like an odd question, but why did you choose Char/VarChar instead of Int if the IDs are all synthetic anyway? The impression I always got was int was better for indexing, searching, etc. anyway.
|
# ? Oct 4, 2010 23:38 |
|
Scaramouche posted:This might seem like an odd question, but why did you choose Char/VarChar instead of Int if the IDs are all synthetic anyway? The impression I always got was int was better for indexing, searching, etc. anyway. Well, like I mentioned, the primary key is still an auto-generated int id; that's what I use for all searching and joining and such. This uid is really only used as a reliable hashcode value since it can be generated when a new object is created. I should have made it a char instead of varchar, however.
|
# ? Oct 5, 2010 00:01 |
|
I wanna do this:code:
What is the workaround here? That's how it stands in the database.
|
# ? Oct 5, 2010 15:51 |
|
Dsan posted:I wanna do this: You could add \ to the double quotes, or just use http://us2.php.net/mysql_real_escape_string.
|
# ? Oct 5, 2010 15:56 |
|
yeah, got it now. thanks Now I just need drupal to come back to life on me XD
|
# ? Oct 5, 2010 16:14 |
|
I'm creating a site to track workouts, purely as an exercise to learn asp.net mvc. My problem relates to database design, though. I want to allow comments on a number of different pages. For example, on a user profile page, on a page detailing a specific workout, on a page detailing body weight, whatever. Each of these pages is more or less coupled to a specific table in my database. However, I don't want to create a separate comment table for each type of page. I've solved this by creating a separate table containing only an id, and then a table containing the comments themselves. I then use a nullable foreign key in each table I want to associate comments with to the id table, and can then perform queries on the comments table itself with that id. Is this a stupid way to do this? Edit: Only one table implemented like this here, but it should get the concept across. wtcurtis fucked around with this message at 00:41 on Oct 8, 2010 |
# ? Oct 8, 2010 00:38 |
|
I don't understand why you don't just have a page ID field in the comments table?
|
# ? Oct 8, 2010 02:55 |
|
baquerd posted:I don't understand why you don't just have a page ID field in the comments table? See, that's the first thing I thought of. But the problem I see there is that I have a number of different types of pages, each of which is related to a different table in the database. However, I want to simply store all the comments in one table, so there could be any number of comments in that table with non-unique page id fields (how could I tell the difference between, say, the user profile with id 5 and the lift session with id 5? Although I suppose I could just break down and use a uniqueidentifier instead of an int for the pk for all those tables...). That's why I went for the extra known-unique id table. It's entirely possible I'm misunderstanding the problem, though. Here's a more accurate example, with three different tables allowing comments.
|
# ? Oct 8, 2010 03:34 |
|
wtcurtis posted:See, that's the first thing I thought of. But the problem I see there is that I have a number of different types of pages, each of which is related to a different table in the database. However, I want to simply store all the comments in one table, so there could be any number of comments in that table with non-unique page id fields Take a look at this comment table:

```
User ID | Page posted on | Post date | Comment text
```

Yes, it's not normalized and has a poor key for that, but as far as indexing is concerned it'll be speedy and light. Here's how I'd select all comments for a page: code:
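If the bare page id is ambiguous across tables (the concern raised above), one common variant is to add a type discriminator column alongside it; all names here are hypothetical:

```sql
-- PageType says which table PageId refers to, so a user profile
-- with id 5 and a lift session with id 5 no longer collide.
CREATE TABLE Comments (
    CommentId   INT IDENTITY(1,1) PRIMARY KEY,
    PageType    TINYINT       NOT NULL,  -- e.g. 1 = profile, 2 = workout
    PageId      INT           NOT NULL,  -- id within that page type's own table
    UserId      INT           NOT NULL,
    PostDate    DATETIME      NOT NULL,
    CommentText NVARCHAR(MAX) NOT NULL
);

-- All comments for workout page 5:
SELECT UserId, PostDate, CommentText
FROM Comments
WHERE PageType = 2 AND PageId = 5
ORDER BY PostDate;
```

The trade-off versus the separate known-unique id table is that PageId can't be a real foreign key, since it points into different tables depending on PageType.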
|
# ? Oct 8, 2010 04:59 |
|
Is it poor practice to select from views within other views? Or can this be a good thing for code maintainability? I've come across it quite a bit on my current project and I really can't find a good reason to do it. Any thoughts?
|
# ? Oct 8, 2010 06:49 |
|
A very bad man posted:Is it poor practice to select from views within other views? Or can this be a good thing for code maintainability? I've come across it quiet a bit on my current project and I really can't find a good reason to do it. Any thoughts? Views can be indexed in advance.
|
# ? Oct 8, 2010 06:53 |
|
A very bad man posted:Is it poor practice to select from views within other views? Or can this be a good thing for code maintainability? I've come across it quiet a bit on my current project and I really can't find a good reason to do it. Any thoughts? In any even halfway decent SQL engine views are at worst essentially free. There's no reason not to use them where clarity calls for it.
|
# ? Oct 8, 2010 07:03 |
|
ShoulderDaemon posted:In any even halfway decent SQL engine views are at worst essentially free. There's no reason not to use them where clarity calls for it. But is he using a "halfway decent SQL engine"? MySQL for example apparently is very bad at optimising views and cannot index them either. See this post.
|
# ? Oct 8, 2010 08:22 |
|
Hammerite posted:But is he using a "halfway decent SQL engine"? MySQL for example apparently is very bad at optimising views and cannot index them either. See this post. Frankly I'm of the opinion that people who use MySQL deserve what's coming to them. Seriously, it's depressing but somehow not particularly shocking that MySQL manages to turn some queries involving views into something more difficult to execute than the result of simply inlining the definition of the view into the query. I'd rather advise people to change to a better free database than encourage them to code around such madness.
|
# ? Oct 8, 2010 08:39 |
|
ShoulderDaemon posted:Frankly I'm of the opinion that people who use MySQL deserve what's coming to them. I'm dealing with Oracle 11g. I understand that views selecting from views, four levels deep, are really no performance cost, but honestly it turns into a horrible mess.
|
# ? Oct 8, 2010 14:32 |
|
ShoulderDaemon posted:In any even halfway decent SQL engine views are at worst essentially free. There's no reason not to use them where clarity calls for it. It's hard to know where to start in describing what's wrong with this statement.
|
# ? Oct 8, 2010 16:34 |
|
MoNsTeR posted:It's hard to know where to start in describing what's wrong with this statement. You can always take a query involving a view, and inline the definition of the view as a subselect. If your view doesn't involve aggregates or anything else fancy like that, you can simply add the conditions of the view to the conditions of the query with an AND. It's hard to imagine how this inlining could add significant cost to the execution plan for the query. If you write the same query twice, once using a view and once inlining the definition of the view, the view shouldn't ever be slower because it's still the same query. The view might be faster than the inlined form if, for example, your database engine lets you amortize some costs by materializing views or creating better indexes.
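A concrete illustration of that inlining, against an assumed schema:

```sql
-- A simple view over a hypothetical Users table:
CREATE VIEW ActiveUsers AS
    SELECT UserId, Name
    FROM Users
    WHERE Active = 1;

-- Query through the view:
SELECT Name FROM ActiveUsers WHERE UserId = 42;

-- The same query with the view inlined by hand: the view's
-- predicate is simply ANDed onto the caller's condition.
SELECT Name FROM Users WHERE Active = 1 AND UserId = 42;
```

A decent optimizer should produce the same plan for both forms, which is the sense in which the view is "free".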
|
# ? Oct 8, 2010 17:52 |
|
ShoulderDaemon posted:You can always take a query involving a view, and inline the definition of the view as a subselect. If your view doesn't involve aggregates or anything else fancy like that, you can simply add the conditions of the view to the conditions of the query with an AND. It's hard to imagine how this inlining could add significant cost to the execution plan for the query. If you write the same query twice, once using a view and once inlining the definition of the view, the view shouldn't ever be slower because it's still the same query. The view might be faster than the inlined form if, for example, your database engine lets you amortize some costs by materializing views or creating better indexes. There's a freaking science to optimizing views, I'm still coming to terms with it. I've been recently rewriting views which do use 'full table access' rather than using the index on production, but uses the index on development due to a huge difference in data. Inlining tends to be great when you only need a single value from another table, like a reference table value.
|
# ? Oct 8, 2010 22:15 |
|
Trying to run this select:code:
code:
What am I doing wrong?
|
# ? Oct 11, 2010 10:02 |
|
You are joining a table called Customers, and it tries to find the columns in that table, so Customers.FreeOrders.Discount is not meaningful in this context. If you have a table called FreeOrders (owned by Customers, which I doubt) it has to be in the join too.
|
# ? Oct 11, 2010 10:06 |
|
woptsufp posted:You are joining a table called Customers and it tries to find the columns in that table. Right, I'm so stupid. Thanks!
|
# ? Oct 11, 2010 10:10 |
|
Is there any way in MySQL to give priority to queries involving a certain table or database? I have a server that originally just hosted files, then later was also turned into a basic email server for a voicemail system. The email setup uses MySQL for its user database, just a few small tables which are very rarely changed and basically just respond to a SELECT when someone receives a voicemail or accesses their account. Recently due to another site being taken down we moved over a very large database with millions of rows in one large table (yes it's the same terribly designed one I've asked a few questions about in the past). Unfortunately, we've discovered that doing almost anything outside of simple INSERTs and queries that hit the indexes properly on this table seems to bring the entire server to a halt and results in users being unable to access their voicemail until the query finishes. A new server is being purchased to permanently store the large database, but until that is available I'm looking for any stopgap solutions which will allow the mail server's database to have priority over everything else. I don't care if queries on the big table are even entirely halted every time something comes in on the voicemail; nothing against the big one is time sensitive. So far I've found plenty of ways to limit the number of queries a user can issue in a given time or what can be queried, but this table is so large that a single very simple non-index SELECT against it is enough to kill the server, so none of those are helpful.
|
# ? Oct 15, 2010 21:27 |
|
Only thing I could think of quickly is maybe installing a separate instance of MySQL to run only the large table.
|
# ? Oct 18, 2010 01:33 |
|
That won't help much if it's limited server resources that are the problem. One instance could easily consume all available I/O. But if the other database is small enough to be kept in RAM, and both instances are tuned properly, it could work out.
|
# ? Oct 18, 2010 17:27 |
|
McGlockenshire posted:That won't help much if it's limited server resources that are the problem. One instance could easily consume all available I/O. My best guess is that it's disk I/O, since the server has a single hard drive and the big table is about 40GB with dozens of varchars. The voicemail database is incredibly tiny (2x 16KB + 1x 176KB tables) and basically static, so keeping it entirely in RAM seems like it would do the job. What's the best way to achieve this? MEMORY table with an --init-file script that loads from the on-disk tables when the server starts?
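A sketch of that --init-file idea; the database and table names are hypothetical. Note that MEMORY tables don't support TEXT/BLOB columns and lose their contents on every restart, which is why the script reloads them from the on-disk copies:

```sql
-- Contents of the file passed via --init-file, run at server start.
-- voicemail.users is the small on-disk table; users_mem is the
-- in-memory copy the mail application would be pointed at.
CREATE TABLE IF NOT EXISTS voicemail.users_mem LIKE voicemail.users;
ALTER TABLE voicemail.users_mem ENGINE = MEMORY;
TRUNCATE TABLE voicemail.users_mem;
INSERT INTO voicemail.users_mem SELECT * FROM voicemail.users;
```

Since the voicemail tables are basically static, the occasional write could go to the on-disk table followed by re-running the reload, or to both copies.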
|
# ? Oct 19, 2010 16:30 |
|
|
Let's say I have a status history table, with statuses like available, not available, fired, etc. Now let's say that sometimes statuses are put in more than once in a row:

```
User ID | Status   | Date     | Notes
----------------------------------------------------------
1       | AVAIL    | 10/10/10 | Confirmed availability
1       | AVAIL    | 08/01/10 |
1       | NOTAVAIL | 07/02/10 | On extended vacation
1       | AVAIL    | 02/05/09 |
```

I wouldn't be averse to a solution that also involved PHP. I'm thinking, maybe select them ordered by date, then loop through PHP until I hit one that doesn't match the most recent status, then get the previous date? I'd like a SQL solution, but I guess a PHP one could work in the meantime. Golbez fucked around with this message at 22:28 on Oct 19, 2010
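If the goal is the date the current status run began (skipping consecutive duplicates), here's a MySQL-flavored sketch. The StatusHistory table name is assumed, and the Date column is assumed to be a real DATE rather than a string:

```sql
-- For user 1: take the latest status, find the newest row with a
-- *different* status, then the earliest current-status row after it.
-- Against the sample data above this picks 08/01/10, not 10/10/10.
SELECT MIN(h.Date)
FROM StatusHistory h
WHERE h.UserId = 1
  AND h.Status = (SELECT s.Status FROM StatusHistory s
                  WHERE s.UserId = 1
                  ORDER BY s.Date DESC LIMIT 1)
  AND h.Date > COALESCE(
      (SELECT MAX(d.Date) FROM StatusHistory d
       WHERE d.UserId = 1
         AND d.Status <> (SELECT s2.Status FROM StatusHistory s2
                          WHERE s2.UserId = 1
                          ORDER BY s2.Date DESC LIMIT 1)),
      '1000-01-01');
```

The COALESCE sentinel covers the case where the user has only ever had one status, so MIN falls back to their earliest row.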
# ? Oct 19, 2010 22:16 |