|
We have a system with a transactional database, which is being replicated to a reporting database for performance. To minimize the changes required, our DB guy is going to set up some sort of linked server between the two, because the app can only connect to one data source for working with the transactional stuff as well as pulling reports. I have to modify all the report queries in the app (yes the sql is hardcoded in the app, rather than stored procs, no I can't change this yet) to reference the tables on the linked server rather than the transactional db. This is fine for the most part, but I just came across a report that's retrieving a sequence.nextval from dual. Can I reference a sequence.nextval from dual over a linked server? This is using Oracle 10g.
|
# ? Nov 23, 2010 15:37 |
|
Sprawl posted:A select insert with a Case statement on that field switching it to the new value. I couldn't get that to work the way I wanted, but I figured out another way. Thanks!
|
# ? Nov 23, 2010 15:49 |
|
I'm pretty used to using Microsoft SQL Server with my job, but I'm playing around with MySQL. I have a database and I've set it up all fancy. If I want to view the SQL to create the schema for this entire database (PKs, FKs, tables, etc.), how would I get that out of MySQL Workbench? I'm feeling a bit lost in the unfamiliar environment = /
|
# ? Nov 24, 2010 03:04 |
|
CCrew posted:I'm pretty used to using Microsoft SQL Server with my job, but I'm playing around with MySQL. I have a database, and I've set it up all fancy, if I want to view the SQL to create the schema for this entire database, (PK's, FK's, Tables, etc etc), how would I get that out of the MySQL workbench? I'm feeling a bit lost in the unfamiliar environment = / I never use MySql Workbench (only EMS MySQL manager which IMO is a poo poo ton better), but you can do this using mysqldump and the --no-data switch from the command line. http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
|
# ? Nov 24, 2010 03:35 |
|
Begby posted:I never use MySql Workbench (only EMS MySQL manager which IMO is a poo poo ton better), but you can do this using mysqldump and the --no-data switch from the command line. That seems to have a lot more than I need, I'm just looking for info on table definitions, nothing about the actual data within the table. In other words, if I were to get the SQL for the database I set up, I could set up an identical empty database elsewhere. I'll look into this in the meantime though, thanks!
|
# ? Nov 24, 2010 04:01 |
|
Ah, I found there is a "Reverse Engineer" option, which I used to reverse engineer my database... then forward engineer into a SQL script. Probably a way too complex method, but it works!
|
# ? Nov 24, 2010 04:20 |
|
CCrew posted:In other words, if I were to get the SQL for the database I set up, I could set up an identical empty database elsewhere. http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_no-data posted:--no-data, -d
|
# ? Nov 24, 2010 06:25 |
|
beuges posted:We have a system with a transactional database, which is being replicated to a reporting database for performance. To minimize the changes required, our DB guy is going to set up some sort of linked server between the two, because the app can only connect to one data source for working with the transactional stuff as well as pulling reports. I thought the answer to this was no, but apparently I was wrong (assuming that by "linked server" you're talking about a dblink). code:
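The code block from this post was lost; a hedged sketch of the kind of statement that works over a dblink (the link and sequence names here are invented):

```sql
-- The whole statement references the remote site, so my_seq is
-- resolved at the far end of the report_link dblink.
SELECT my_seq.nextval FROM dual@report_link;
```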
|
# ? Nov 24, 2010 07:04 |
|
You can also reference it as my_sequence.nextval@mydblink, e.g.code:
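Spelled out as a sketch, using the identifiers from the post:

```sql
-- The sequence is qualified with the dblink directly; dual here is local.
SELECT my_sequence.nextval@mydblink FROM dual;
```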
|
# ? Nov 24, 2010 17:05 |
|
MoNsTeR posted:Edit to add: doing complex queries over dblinks is a Bad Idea and should be avoided if at all possible. I expect it's not possible in your situation, but just be aware you're swimming into troubled waters. So doing a few dozen joins over dblinks on tables with millions of rows might cause hassles? This entire system needs to go into the coding horrors thread. Thank goodness my team inherited it from another region rather than bearing the shame of having written it themselves. Thanks for the help though. I'll see how badly things turn out soon enough.
|
# ? Nov 24, 2010 19:45 |
|
CCrew posted:Ah, I found there is a "Reverse Engineer" option, that I used to reverse engineer my database...then forward engineer into a SQL script. Probably a way to complex method, but it works! What I gave you is exactly what you wanted: you simply run mysqldump with the --no-data switch and it dumps the table structure, with no data, as SQL. "mysqldump --all-databases --no-data > myfile.sql" Also, there is a program from EMS that will convert table structure and/or data from MySQL to SQL Server and vice versa. http://www.sqlmanager.net/en/products/datapump I am sure there are other programs out there that do the same thing.
|
# ? Nov 24, 2010 19:48 |
|
I've been trying to find an answer to my question on the MySQL reference site but I still can't get this to work. I have a database with, let's say, two tables: table ONE and table TWO. I need to create a trigger that updates table TWO when a row in table ONE is updated. What I have is something like this: code:
PS. I need the update on table ONE to be added to the population in table TWO.
|
# ? Nov 29, 2010 01:09 |
|
Santa is strapped posted:I've been trying to find an answer to my question in the MySQL reference site but I still can't get this to work. I assume that if you need to link the two and then add to the population, it would be something like: code:
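The post's code block didn't survive; a minimal sketch of the kind of trigger being described (table and column names are guesses, not the poster's):

```sql
DELIMITER //

CREATE TRIGGER one_after_update
AFTER UPDATE ON one
FOR EACH ROW
BEGIN
    -- Push the change in ONE's value into TWO's population column.
    UPDATE two
       SET population = population + (NEW.amount - OLD.amount)
     WHERE two.one_id = NEW.id;
END//

DELIMITER ;
```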
|
# ? Nov 29, 2010 18:38 |
|
I have a table with a composite primary key, which consists of two foreign keys which reference the same primary key on another table.code:
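The original CREATE TABLE block was lost; a hedged reconstruction of the schema being described (names are guesses):

```sql
CREATE TABLE Person (
    id INT NOT NULL PRIMARY KEY
);

CREATE TABLE Friends (
    person1 INT NOT NULL REFERENCES Person (id) ON DELETE CASCADE,
    -- adding ON DELETE CASCADE here is what SQL Server rejects:
    person2 INT NOT NULL REFERENCES Person (id),
    PRIMARY KEY (person1, person2)
);
```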
I also want it to cascade deletions for person2 as well but this is not allowed by SQL Server (may cause cycles or multiple cascade paths). Do I have to make triggers to enforce these constraints?
|
# ? Nov 30, 2010 09:38 |
|
THC posted:I have a table with a composite primary key, which consists of two foreign keys which reference the same primary key on another table. Just constrain the Friends table so that person1 is strictly less than person2. It means when you insert you only have to do a marginal bit of extra work and in exchange you prevent duplicate edges in your friendship graph. I don't know what to do about your cascade problem, though.
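A sketch of that constraint, assuming integer person ids:

```sql
-- Prevents (a, b) and (b, a) from coexisting, and also rules out
-- a self-friendship row, since person1 = person2 fails the check.
ALTER TABLE Friends
    ADD CONSTRAINT CK_Friends_Ordered CHECK (person1 < person2);
```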
|
# ? Nov 30, 2010 09:45 |
|
I have some SQL that looks like this:code:
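The quoted SQL was lost, but the pattern being described, with an ON clause sitting far away from its JOIN near the end of the statement, looks something like this (all names invented):

```sql
SELECT a.name, b.title, c.amount
FROM table_a a
    INNER JOIN table_b b
        INNER JOIN table_c c
            ON c.b_id = b.id   -- binds to the b/c join...
        ON b.a_id = a.id       -- ...and this one to the a/b join
WHERE a.active = 1
```

This nested-join form is legal in standard SQL, which is presumably why MS SQL accepts it; it just reads badly.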
I'm converting this app to use MySQL and while it's fine for the most part, statements like these make MySQL cry. I can fix things by moving the second-last line back up beside the join on the third line. My instincts say this isn't going to affect the result, but I honestly haven't seen this before. Is this something that MSSQL just lets slide or am I looking at some sort of very fancy MSSQL-specific join? 8ender fucked around with this message at 05:54 on Dec 2, 2010 |
# ? Dec 2, 2010 05:51 |
|
I just installed MySQL Server 5.1 to learn on. What's the best free resource you know of for learning sql? It would be really nice if it had example data and a way for me to test myself as well, like sql code that I would first have to try to create and then could check by running the correct code, kind of like the answers at the back of the book.
|
# ? Dec 3, 2010 07:25 |
|
8ender posted:I have some SQL that looks like this: Only use ON right after the JOIN for the specific table; otherwise move it from an ON clause down to the WHERE. The ON clause should only apply to the join right before it; with 2 ON clauses like that it shouldn't even work. code:
|
# ? Dec 3, 2010 18:45 |
|
bloodsoup posted:I just installed MySQL Server 5.1 to learn on. What's the best free resource you know of for learning sql? It would be really nice if it had example data and a way for me to test myself as well, like sql code that I would first have to try to create and then could check by running the correct code, kind of like the answers at the back of the book. If you don't already know about relational databases, the best place to start is learning about table structure and how to properly represent data in multiple tables. You should try to get a good grasp on what normalization means and the different forms. This knowledge is very important and carries over to any relational type database. There are a lot of books out there on database theory and you don't necessarily need a database in front of you to learn it. After you get that, then try and figure out MySql, which will probably be easy for you to pick up at that point.
|
# ? Dec 3, 2010 19:03 |
|
THC posted:I have a table with a composite primary key, which consists of two foreign keys which reference the same primary key on another table. Changing to CHECK ( person1 < person2 ) is completely valid, but if you might not have control over the order the names are inserted, you can use a unique compound computed column with relatively minimal additional overhead as well. It's really a matter of which way is likely to generate less exceptions in your application. Unfortunately, the only way to get around the cyclic cascade issue is to create a trigger. Fortunately, it's not that big of a deal. code:
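The trigger code block is missing here; a sketch of the workaround being described, with an INSTEAD OF DELETE trigger standing in for the second cascade path (T-SQL, table and column names assumed):

```sql
CREATE TRIGGER TR_Person_Delete ON Person
INSTEAD OF DELETE
AS
BEGIN
    -- Remove friendships on both sides first...
    DELETE FROM Friends
    WHERE person1 IN (SELECT id FROM deleted)
       OR person2 IN (SELECT id FROM deleted);

    -- ...then delete the person rows themselves.
    DELETE FROM Person
    WHERE id IN (SELECT id FROM deleted);
END
```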
8ender posted:I have some SQL that looks like this: While I feel that the original query is abhorrent, they're still equivalent for the inner joining. Just make sure to look for differences in the execution plans when you modify the queries you're exporting. It might be a good idea to do that anyway as MS SQL and MySQL sometimes deal with things that look exactly the same quite differently, like CROSS JOINs; MySQL, MS SQL code:
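The MySQL/MS SQL comparison block is gone; the CROSS JOIN difference being referenced is, as far as I know, that MySQL treats CROSS JOIN and INNER JOIN as interchangeable while MS SQL follows the standard:

```sql
-- Parses in MySQL, where CROSS JOIN may take an ON clause,
-- but is a syntax error in MS SQL:
SELECT * FROM a CROSS JOIN b ON a.id = b.a_id;

-- The standard form both accept: a true Cartesian product.
SELECT * FROM a CROSS JOIN b;
```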
Avarkx fucked around with this message at 20:07 on Dec 3, 2010 |
# ? Dec 3, 2010 20:03 |
|
Sprawl posted:Only use On right after the Join for the specific table otherwise move it from an ON statement down to there where. I definitely know where the ON should go, that's what was confusing me. I thought I might be looking at some sort of MSSQL-specific voodoo happening. Good to know I was just looking at a poorly written statement. It looks like MSSQL is as loose about the rules with SQL as Internet Explorer is with HTML markup. Avarkx posted:While I feel that the original query is abhorrent, they're still equivalent for the inner joining. Just make sure to look for differences in the execution plans when you modify the queries you're exporting. It might be a good idea to do that anyway as MS SQL and MySQL sometimes deal with things that look exactly the same quite differently, like CROSS JOINs; MySQL, MS SQL Thankfully the queries aren't that sophisticated for the most part. The only real problems I've run into so far are places where the TOP and IsNull commands are being used. MySQL has a pretty straightforward alternative for those. The queries where I've moved the ON statements back beside the join seem to be returning the same result. The whole app seems faster on MySQL but that may be because I added all the indexes and foreign keys it should have had in the first place. The exercise is giving me a healthy appreciation for ORMs though. So many of these queries were simple enough that they really should have been handled by a decent ORM. A nice PHP one was installed and then just used to run straight SQL. Bummer.
|
# ? Dec 4, 2010 17:49 |
|
So I'm learning to use MySQL to ease my work with the NCBI databases. The problem is they are quite large, and my queries against them are throwing errors left and right. What I have is my own database with the tables from NCBI loaded into it (roughly 20 million rows, 1.8 GB for one). Against that I have a simple query; no issues there. My problem is that I keep getting two errors any time my queries run longer than 600 seconds against the server, namely errors 2006 and 2013 ("MySQL server has gone away" and "Lost connection to MySQL server during query"). Seeing as how the server is literally my desktop and I'm right on it, this seems a bit weird. All of my queries are over 600 seconds long, so this is a problem. Weirder, I can go into my admin panel and see the query still executing, so as a stopgap I'm dumping results into tables and pulling those back out later. I know it's something trivial to do with my setup, but I can't find anything recent on the internet and, well... here I am. I'm using MySQL Workbench 5.1.53 and MySQL Server 5.5 (I think). Appreciate the help, and hope I'm in the right place.
|
# ? Dec 6, 2010 06:52 |
|
EtaBetaPi posted:So I'm learning to use mySQL to ease my work with the NCBI databases. The problem is they are quite large, and my queries against them are throwing errors left and right. The workbench I have installed on this machine is a little newer than the one you're using, but I'm assuming everything is in the same place. There is a preference in WorkBench that sets the connection keep-alive to 600 seconds. Change that.
|
# ? Dec 6, 2010 12:37 |
|
Iniluki posted:The workbench I have installed on this machine is a little newer than the one you're using, but I'm assuming everything is in the same place. Done, still dies on me. Tried to set it to 0, 1, and 1000 and it keeps on crapping out at 600 seconds exactly.
|
# ? Dec 6, 2010 17:06 |
|
Could the server be running out of memory? What does the MySQL error log say?
|
# ? Dec 6, 2010 20:23 |
|
I'm just getting into using the CLR to create TSQL procedures and I'm running into a problem with error handling. What I'm doing is creating a TSQL stored procedure that will call a CLR procedure (it gets a response from a webpage). The TSQL stored proc will be running in a transactional space in a TRY/CATCH block and will need to catch any errors generated in the CLR proc and roll back the transaction. The problem is when I enclose the call to the CLR proc in a TRY/CATCH I get the following error: "Msg 0, Level 11, State 0, Line 0 A severe error occurred on the current command. The results, if any, should be discarded." Here is my TSQL code: code:
code:
Things I've tried: enclosing the offending CLR code in a TRY/CATCH block; having no TRY/CATCH in the CLR code (see example); building the CLR proc against both the 2.0 and 3.5 .NET frameworks. I am testing this on a SQL 2008 Dev edition install I have on my computer (no service packs; currently downloading SP2 to see if it helps). Any ideas?
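The two code blocks from this post were lost. A hedged sketch of the T-SQL wrapper pattern being described (the proc name and parameter are hypothetical):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- Hypothetical CLR proc that fetches a web response.
    EXEC dbo.clr_GetWebResponse @url = 'http://example.com/';

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- A doomed transaction (XACT_STATE() = -1) can only be rolled back.
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;
    -- log and/or re-raise the error here
END CATCH
```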
|
# ? Dec 6, 2010 22:21 |
|
EtaBetaPi posted:Done, still dies on me. Tried to set it to 0, 1, and 1000 and it keeps on crapping out at 600 seconds exactly. It's strange that it's exactly 600 seconds; it sounds like a setting somewhere. Check inside the MySQL config file (my.cnf on Linux/BSD, my.ini on Windows) and see if this is set: interactive_timeout = 600 # seconds to wait for activity on an interactive connection. Also, what happens when you run the query through the command-line interface to your MySQL server?
|
# ? Dec 7, 2010 11:31 |
|
Well, I've kind of fixed my issue. If the error is in the query (SELECT 1/0) then it blows up. If the issue is .NET code (Convert.ToInt32("abc")) then it handles the error OK. Really wish there was a way to catch the query error though.
|
# ? Dec 7, 2010 20:10 |
|
Is there a way to compress a WHERE clause that looks like this (MySQL 5.1) SELECT Blah FROM Blah WHERE Blah = '1' OR Blah = '2' OR Blah = '4' into some form like SELECT Blah FROM Blah WHERE Blah = '1','2','3' edit. None of the provided operators will work, nor BETWEEN or LIKE or anything like that.
|
# ? Dec 9, 2010 21:36 |
|
i am not zach posted:Is there a way to compress a WHERE clause that looks like this WHERE Blah IN ('1', '2', '3')
|
# ? Dec 9, 2010 21:49 |
|
Hammerite posted:WHERE Blah IN ('1', '2', '3') Thank you so much!
|
# ? Dec 9, 2010 21:49 |
|
I am looking for a program like Microsoft Visio that can be used to visually mock-up data models. Something web based would be awesome.
|
# ? Dec 13, 2010 00:42 |
|
I have a MySQL performance question. I'm expanding a system that builds reports out of parameters specified by the user. Right now they are limited to the past 18 months, as we archive records by year after that. I figure I have two options for when their search involves something in the archives:
1. Union the tables based on the dates that they have in their query. So if they request data from 2008 and 2010 for some reason, I can just union the 2008 table with the current one. I'm guessing this will be a pain in the rear end to do, but I can do it.
2. Union all the tables in a view and just do a query based on that. This would be easier, and I'm not sure if it would be much slower than option 1.
The tables have about 7 million records on average and go back to 2000.
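A sketch of option 2, assuming per-year tables with identical definitions (table names invented):

```sql
CREATE VIEW all_records AS
          SELECT * FROM records_current
UNION ALL SELECT * FROM records_2009
UNION ALL SELECT * FROM records_2008;
```

One caution: in MySQL 5.1 a view containing a UNION can't use the MERGE algorithm, so it gets materialized as a temporary table and a date filter in the report query won't be pushed down into the individual year tables. On 7-million-row tables that makes option 1 likely to be much faster despite the extra work.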
|
# ? Dec 13, 2010 19:35 |
|
I need to select a record based on criteria of the record that happened directly before it (chronologically) for a particular employee. I have no idea how to do this since there isn't a relationship. Here is a pseudo data set of Events, where every event has a FK relationship to a location (not important), a status, and an employee: I'd like to select a count of all events, grouped by EmployeeId, where: 1. StatusId = 1 2. The last event by that EmployeeId was a 9. So, in the data set above, records with EventIds 3 and 5 would be counted, the rest would not. I've ordered them in a chronological order, but note that the preceding record for any event is not necessarily related to that employee. For instance, Event #7 does not count because the last event related to that employee, chronologically, (event 3), has StatusId of 1, not 9. magic_toaster fucked around with this message at 05:06 on Dec 15, 2010 |
# ? Dec 15, 2010 05:02 |
|
magic_toaster posted:
You want to use an analytic function like lag(). I'm not sure what type of db you're using, but here's how you'd do it in Oracle (which you can't be using since you have a column called 'date'...). Hopefully you can adapt it for whatever you've got. code:
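The Oracle code block didn't survive; a sketch of the lag() approach on the pseudo data set (column names assumed from the question):

```sql
SELECT employee_id, COUNT(*) AS events
FROM (
    -- prev_status is the status of that employee's previous event
    SELECT employee_id,
           status_id,
           LAG(status_id) OVER (PARTITION BY employee_id
                                ORDER BY event_date) AS prev_status
    FROM events
)
WHERE status_id = 1
  AND prev_status = 9
GROUP BY employee_id
```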
Goat Bastard fucked around with this message at 06:28 on Dec 16, 2010 |
# ? Dec 15, 2010 06:19 |
|
If you're running mssql 2005 or greater, you don't get the lag analytical function, but you do get the row_number() analytical function.code:
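The code block here was lost; a sketch of the row_number() version, self-joining adjacent row numbers to simulate lag (column names assumed):

```sql
WITH numbered AS (
    SELECT EventId, EmployeeId, StatusId,
           ROW_NUMBER() OVER (PARTITION BY EmployeeId
                              ORDER BY EventDate) AS rn
    FROM Events
)
SELECT cur.EmployeeId, COUNT(*) AS cnt
FROM numbered cur
    JOIN numbered prev
      ON prev.EmployeeId = cur.EmployeeId
     AND prev.rn = cur.rn - 1   -- the employee's immediately preceding event
WHERE cur.StatusId = 1
  AND prev.StatusId = 9
GROUP BY cur.EmployeeId
```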
Jethro fucked around with this message at 17:39 on Dec 15, 2010 |
# ? Dec 15, 2010 17:27 |
|
Thanks for the two replies. Here's what I've been able to come up with:code:
I should also mention that e1 is being inner joined with 4 other tables, which as it stands, cannot be avoided. There is a filter that is done on the events that requires a value from a table that is 4 tables away. We're going to look into how the events table can become less reliant.
|
# ? Dec 16, 2010 04:53 |
|
Jethro posted:If you're running mssql 2005 or greater, you don't get the lag analytical function, but you do get the row_number() analytical function. Thanks! I'll convert this over to the real data set tomorrow morning and see if it performs better than my solution above.
|
# ? Dec 16, 2010 04:58 |
|
Triggers in MySQL. I'd like to update a recent_changes table with the user, time, action and preferably number of rows affected for any DELETE or UPDATE query. I haven't worked with triggers or stored procedures but have done quite a bit with regular querying, so I'm just looking for a nice skeleton to work off of. My assumption, based on the manual, is: code:
Any help or a nice skeleton would be appreciated.
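A skeleton along those lines (table names hypothetical). One caveat: MySQL row-level triggers fire once per affected row, so a per-statement "rows affected" count isn't directly available inside the trigger; you'd either count the per-row audit inserts afterwards or tally into a session variable.

```sql
DELIMITER //

CREATE TRIGGER audit_after_update
AFTER UPDATE ON some_table
FOR EACH ROW
BEGIN
    -- one audit row per updated row
    INSERT INTO recent_changes (user, changed_at, action)
    VALUES (CURRENT_USER(), NOW(), 'UPDATE');
END//

CREATE TRIGGER audit_after_delete
AFTER DELETE ON some_table
FOR EACH ROW
BEGIN
    INSERT INTO recent_changes (user, changed_at, action)
    VALUES (CURRENT_USER(), NOW(), 'DELETE');
END//

DELIMITER ;
```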
|
# ? Dec 16, 2010 18:53 |
|
So my SQL is rusty as all gently caress, so do you think anyone could link me to a sample database that I could play around with and get back up to speed?
|
# ? Dec 16, 2010 21:15 |