|
Jethro posted:
|
# ? Oct 12, 2009 20:32 |
|
|
|
Is there a better way to do this? (T-SQL) I have entries being made into the database something like this... code:
So my result is code:
code:
EDIT: This is dumb, can't I just do the exact same thing I'm doing in the delete but the opposite and handle it in the select? EDIT2: No. benisntfunny fucked around with this message at 23:54 on Oct 14, 2009 |
# ? Oct 14, 2009 23:32 |
|
code:
Also, customer_number is Bob?
|
# ? Oct 14, 2009 23:41 |
|
ShoulderDaemon posted:
it's an example... albeit, since I gave a number as varchar, you might wonder how good. MAX(DATE) didn't work for me. I also have quite a few more columns being returned.
|
# ? Oct 14, 2009 23:43 |
|
benisntfunny posted:it's an example... albeit since I gave a number as varchar you might wonder how good. MAX(DATE) didn't work for me. I also have quite a few more columns being returned. Max(Date) should do exactly what you want unless it's not stored in the database as a date.
|
# ? Oct 14, 2009 23:57 |
|
Sprawl posted:Max(Date) should do exactly what you want unless it's not stored in the database as a date. code:
|
# ? Oct 15, 2009 00:00 |
|
You probably don't want to group by the column you are trying to take the maximum of.
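For what it's worth, the usual pattern here (sketched with guessed table/column names, since the original code block didn't survive the thread) is to group only by the key, then join the aggregate back to the detail rows to pull the full latest row per customer:

```sql
-- 'entries', 'customer_number', and 'entry_date' are hypothetical names.
-- Group by the key only; join back on (key, max date) to get whole rows.
SELECT e.*
FROM entries e
JOIN (
    SELECT customer_number, MAX(entry_date) AS max_date
    FROM entries
    GROUP BY customer_number
) latest
  ON latest.customer_number = e.customer_number
 AND latest.max_date = e.entry_date;
```

If two rows share the same customer and date you'll still get both, so a tiebreaker column may be needed.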
|
# ? Oct 15, 2009 00:05 |
|
ShoulderDaemon posted:You probably don't want to group by the column you are trying to take the maximum of. Probably shouldn't. You're right. Thanks. I knew it was dumb. EDIT: well actually it didn't work on my main select, but it works enough for me to do this a cleaner way. benisntfunny fucked around with this message at 00:10 on Oct 15, 2009 |
# ? Oct 15, 2009 00:06 |
|
I think I just hit a bug with MySQL 5.0.51a. In one of my larger queries, I have a section that is WHERE user_id IN (1,2,3,4). This works fine for every combination, (3,4,5), (4), (5), but as soon as I try (1), it pulls up zero results. If I switch IN to =, it works. I'll just avoid using that user_id for anybody, but has anyone heard of this happening?
|
# ? Oct 16, 2009 15:30 |
|
I need to make this faster (MySQL): code:
Right now I'm testing on a database with 50k+ rows in each table and it takes about 2 seconds for the query to run, which is unacceptable. It would be very difficult to change the column types or database structure, so don't bother telling me to unless it's the only possible way. Also, I didn't design this thing and I already know it's - let's say - not great. Let me know if I need to provide more information.
|
# ? Oct 16, 2009 17:24 |
|
Munkeymon posted:I need to make this faster (mysql): If subs and users are 1:1 you should get rid of the DISTINCT. Also, using an outer join here is inappropriate since you require rows in both to exist. If subs has users not in the users table you should normalize the users table. code:
Are subs that begin and end in the future actually possible? If not, you can take out the FromDate comparison, since you really only care if they are ending in the future. Also, you might be able to get better results by bounding your ToDate if you know that your subs can only end (say) at most 90 days from today (ToDate between '20091016' and '20091216').
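A sketch of the suggested rewrite (table and column names are guesses from the thread, so treat this as illustrative rather than the actual query):

```sql
-- Inner join instead of an outer join, no DISTINCT, and only the ToDate
-- predicate, optionally bounded on both sides if subs are short-lived.
SELECT u.*
FROM users u
INNER JOIN subs s ON s.user_id = u.id
WHERE s.ToDate >= '20091016'
  AND s.ToDate <= '20091216';   -- optional upper bound
```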
|
# ? Oct 16, 2009 18:03 |
|
var1ety posted:If subs and users are 1:1 you should get rid of the DISTINCT. Also, using an outer join here is inappropriate since you require rows in both to exist. If subs has users not in the users table you should normalize the users table.
The inner join is about 20% faster if I remove distinct, so that's cool and thank you, but I'm concerned that the result would be wrong for one of our databases where users and subs aren't 1:1. I'll have to check that. It's much slower with distinct, however: ~7 seconds as opposed to ~2. Subscriptions can be in the future because this is for statistics reporting and it could examine any date. We have to support any date range on a subscription that anyone might want to type in. October 31, 1910 -> November 12th, 3045? Yeah, sure, why not? I think I've tried changing the columns to unsigned ints and had even worse results. I'll have to check that again, I guess. I'm pretty sure dates are out of the question because then I'd have to rewrite any query that looks at this table, right? Background: this being a 'traditional' PHP monstrosity, all database interaction is done via mysql_query with query strings constructed in-code. Munkeymon fucked around with this message at 18:31 on Oct 16, 2009 |
# ? Oct 16, 2009 18:28 |
|
Munkeymon posted:The inner join is about 20% faster if I remove distinct, so that's cool and thank you, but I'm concerned that the result would be wrong for one of our databases where users and subs aren't 1:1. I'll have to check that. It's much slower with distinct, however. ~7 seconds as opposed to ~2 Cutting out unnecessary steps is the fastest way to get a job done. You should spend your time cleaning up the outliers and (if it makes sense in your situation) policing these using constraints so they cannot recur. You could also consider staging the results into a table and refreshing daily (or intra-daily, or after inserts).
|
# ? Oct 16, 2009 18:44 |
|
var1ety posted:Cutting out unnecessary steps is the fastest way to get a job done. You should spend your time cleaning up the outliers and (if it makes sense in your situation) policing these using constraints so they cannot recur. The problem is that users:subs being 1:many is the correct behavior as the system was designed, but some databases are going to be 1:1 by chance because of the way they happen to be used, so I can't make that assumption or clean anything up.
|
# ? Oct 16, 2009 19:17 |
|
Munkeymon posted:We should do a lot of things we don't It's 1:many, but we might orphan subs rows somewhere. Well, you could do some hacky things that I've done before on terrible web apps. Create new columns called <name>2, make them date values, and then change anything that writes the data to also update those columns, without disturbing anything working on the old ones. The inner join method will only make sure you have a record in both tables, so if a users entry is deleted without wiping out its subs entry, the subs won't show up in the list; otherwise it will do everything the same as the right join. You do have a combined index of the two dates as one index, right? Are these MyISAM or InnoDB tables, and are your indexes B-trees?
|
# ? Oct 16, 2009 19:29 |
|
Sprawl posted:Well, you could do some hacky things that I've done before on terrible web apps. We never actually delete users or subs (except by hand if we have to for some reason). Making a combined index didn't do anything - it still won't use the index, and I did try both (FromDate,ToDate) and (ToDate,FromDate). I suspect it simply can't use an index with any combination of less than or greater than on a varchar column, but I can't find confirmation of this in the manual. It's all MyISAM and they're all B-trees. Edit: Making a copy of the subs table and turning the dates into ints actually slowed it down - wtf? Edit 2: Is it unreasonable of me to expect it to be able to use an index to examine a column with greater or less than? Munkeymon fucked around with this message at 20:39 on Oct 16, 2009 |
# ? Oct 16, 2009 20:07 |
|
Munkeymon posted:We never actually delete users or subs (except by hand if we have to for some reason). Making a combined index didn't do anything - it still won't use the index and I did try both (FromDate,ToDate) and (ToDate,FromDate). I suspect it simply can't use an index with any combination of less or greater than on a varchar column, but I can't find confirmation of this in the manual. Can you try date value fields and maybe an InnoDB table? I find they cache these kinds of results better.
|
# ? Oct 16, 2009 21:18 |
|
Bite the bullet and convert to actual date fields. If you don't, these problems will only get worse, and at the same time it will become harder to make the switch.
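One low-risk way to do that conversion in MySQL (column names hypothetical; this assumes the varchars hold 'YYYYMMDD' strings, as the thread's date literals suggest):

```sql
-- Add a real DATE column alongside the varchar, backfill, verify, then swap.
ALTER TABLE subs ADD COLUMN FromDate_new DATE;
UPDATE subs SET FromDate_new = STR_TO_DATE(FromDate, '%Y%m%d');
-- After checking the results:
-- ALTER TABLE subs DROP COLUMN FromDate;
-- ALTER TABLE subs CHANGE FromDate_new FromDate DATE;
```

Doing it as a new column first means the old queries keep working until you're ready to cut over.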
|
# ? Oct 16, 2009 21:27 |
|
I'm trying to find a very conditional mysql insert command that checks for a duplicate key AND something else. Using php. Usual statement goes something like this: INSERT INTO table (a,b,c) VALUES (1,2,3) ON DUPLICATE KEY UPDATE a=4, b=5, c=6; I want something like this: INSERT INTO table (a,b,c) VALUES (1,2,3) ON DUPLICATE KEY && DUPLICATE columnName($value) UPDATE a=4, b=5, c=6; LP0 ON FIRE fucked around with this message at 22:19 on Oct 16, 2009 |
# ? Oct 16, 2009 22:16 |
|
MoNsTeR posted:Bite the bullet and convert to actual date fields. If you don't, these problems will only get worse, and at the same time it will become harder to make the switch. I tried date fields in MyISAM and it's still only about 25% faster than my base time. Thankfully it understands ISO dates so I didn't have to write anything to convert them. I'll give InnoDB a shot. Edit: InnoDB doesn't make any difference :\ Is there really no way to make it use an index on these columns? I tried INT, CHAR, VARCHAR, and DATE. Munkeymon fucked around with this message at 23:27 on Oct 16, 2009 |
# ? Oct 16, 2009 23:05 |
|
Munkeymon posted:I tried date fields in MyISAM and it's still only about 25% faster than my base time. Thankfully it understands ISO dates so I didn't have to write anything to convert them. I'll give InnoDB a shot. I have one setup that uses them on a date field fine. There is a primary key set for the table, right?
|
# ? Oct 17, 2009 01:05 |
|
Munkeymon posted:I tried date fields in MyISAM and it's still only about 25% faster than my base time. Thankfully it understands ISO dates so I didn't have to write anything to convert them. I'll give InnoDB a shot. Could just be your data distribution or the way you're ordering your date fields in your index. If the optimizer thinks you're going to be pulling out a ton of rows then it may think it is faster pulling the whole table. Are you doing a composite index on (fromdate, todate)? If most of your entries have a fromdate <= today then it'll probably ignore the index. If only a small percentage have todate >= today then try indexing on that column first.
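In concrete terms, that suggestion looks something like this (table and column names are guesses from the thread):

```sql
-- Lead the composite index with ToDate, since ToDate >= today is the
-- selective predicate here; then check what the optimizer actually picks.
CREATE INDEX ix_subs_to_from ON subs (ToDate, FromDate);

EXPLAIN
SELECT * FROM subs
WHERE FromDate <= '20091017' AND ToDate >= '20091017';
```

If EXPLAIN still shows a full table scan, the optimizer is estimating that the range matches too many rows for the index to pay off.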
|
# ? Oct 17, 2009 06:08 |
|
awdio posted:I'm trying to find a very conditional mysql insert command that checks for a duplicate key AND something else. Using php. Do you mean you want the two columns to have unique pairs? If so you can create a unique index on (column1, column2) and it should work. Not sure if I've understood your problem correctly though.
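A sketch of that idea (table and column names hypothetical): with a composite unique key, ON DUPLICATE KEY UPDATE fires only when that pair of values already exists:

```sql
-- Make the (a, b) pair unique so it participates in duplicate detection.
ALTER TABLE t ADD UNIQUE KEY uk_a_b (a, b);

-- Inserts (1,2,3); if a row with a=1 AND b=2 already exists, updates c instead.
INSERT INTO t (a, b, c) VALUES (1, 2, 3)
ON DUPLICATE KEY UPDATE c = VALUES(c);
```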
|
# ? Oct 17, 2009 11:46 |
|
Vince McMahon posted:Do you mean you want the two columns to have unique pairs? If so you can create a unique index on (column1, column2) and it should work. Not sure if I've understood your problem correctly though. Sorry I wasn't so clear. Let me explain a slightly different situation, because I think it would be even more help. I would like one statement that inserts if two certain column values do not exist. If they do exist, do an update.
|
# ? Oct 17, 2009 15:10 |
|
Sprawl posted:I have one setup that uses them on date field fine? there is a primary key set for the table right? Yeah, an autoincrementing ID column that I don't think gets used much if at all. var1ety posted:Could just be your data distribution or the way you're ordering your date fields in your index. If the optimizer thinks you're going to be pulling out a ton of rows then it may think it is faster pulling the whole table. It is possible for this dataset that it thinks pulling the whole table is faster since most of the subs are valid on this database. I think I know of a different one I can test that on. I'm testing with 4 indexes on from, to, (from,to) and (to,from). It shouldn't be a problem to have them all on the same table should it? I'm not going to confuse the optimizer or anything am I? I'd just be happy to see it try to use one.
|
# ? Oct 19, 2009 15:22 |
|
Munkeymon posted:I'm testing with 4 indexes on from, to, (from,to) and (to,from). It shouldn't be a problem to have them all on the same table should it? I'm not going to confuse the optimizer or anything am I? I'd just be happy to see it try to use one. As far as I know adding too many indexes never confuses it. It will always use at most one index, and it will (hopefully) choose the best one. Having redundant indexes just means you are taking up space.
|
# ? Oct 20, 2009 01:33 |
|
I am trying to query some XML in an XML column in a SQL2K8 DB. Here is the XML: code:
code:
Edit: I forgot the '@' symbol on the attribute name. Doh! Fastbreak fucked around with this message at 17:03 on Oct 20, 2009 |
# ? Oct 20, 2009 16:16 |
|
I'm trying to find rows with events in between the given event_start_time and event_end_time. code:
code:
code:
|
# ? Oct 20, 2009 22:02 |
|
almostkorean posted:Can someone please help and tell me what the effff is wrong here?
|
# ? Oct 20, 2009 22:19 |
|
Sub Par posted:It's probably expecting the default format of 'DD-MON-YY'. Try using to_date(yourVariable,'YYYY-MM-DD HH24:MI:SS'). It is failing on my implementation (10g) because of the fractional seconds, so either remove that fraction or deal with it in the format mask in to_date(). I googled for a minute and found something about using FF for fractional seconds, but it's not working for me. Dates only have precision to the seconds position so the FF modifier does not work with to_date(). You can use it with to_timestamp(). OP - if your column is a date you should strip the fractional seconds then use to_date() and/or modify your NLS_DATE_FORMAT session variable. If your column is a timestamp you should use to_timestamp() and either set your NLS_TIMESTAMP_FORMAT, or specify it as an argument to to_timestamp(). code:
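Roughly, the two cases look like this (the literals are made up; the original code block didn't survive the thread):

```sql
-- DATE column: no fractional seconds, so strip them before to_date()
SELECT TO_DATE('2009-10-20 22:39:01', 'YYYY-MM-DD HH24:MI:SS') FROM dual;

-- TIMESTAMP column: FF handles the fractional part in to_timestamp()
SELECT TO_TIMESTAMP('2009-10-20 22:39:01.123456', 'YYYY-MM-DD HH24:MI:SS.FF') FROM dual;
```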
|
# ? Oct 20, 2009 22:39 |
|
var1ety posted:stuff Nice! It worked, thank you very much
|
# ? Oct 21, 2009 01:08 |
|
Please be patient because I think I'm about to ask some appallingly dumb poo poo. I'm trying to write a POS (Point of Sale, although atm Piece Of poo poo is more appropriate) application in C#/.NET using SQL Server as the database. I want to use as much ANSI SQL rather than MS stuff as possible, as I don't want a lot of work when I eventually change the db to MySQL or PostgreSQL. Everything (suppliers, brands, products, categories, invoices, ...) is stored over 20-something tables in a single database.

What I want to achieve is that different types of users with different rights can sign in to the application. For example, administrators have full read/write/create control, management has read/write on all tables, accounting can only create products and change tax rates...

My first question: how do I work with roles/logins/users/schemas? Can I keep a single login, create schemas for every kind of employee, and then create users in a role that uses these schemas (where the users would inherit their permissions from the role they are in)?

Second: on the login screen, the login/password are tested against a table "Users" in the main database with the following stored procedure: code:
code:
But the code:
So should I check logins/password under a readonly user, and, on a successful login, log that user off and log in with the credentials given by the user? Can I even specify a connection with a specific user rather than a login? Should I gtfo and go do webdesign? Please help me SQL thread
|
# ? Oct 22, 2009 19:46 |
|
Well, if you're using MSSQL it can integrate really nicely with AD users and permissions, and then you can set it to do lots of stuff. But MySQL/PostgreSQL do things differently, even from each other, so for future-proofing purposes you will likely have to do all of this outside of the SQL level.
|
# ? Oct 22, 2009 19:56 |
|
Sprawl posted:Well, if you're using MSSQL it can integrate really nicely with AD users and permissions, but MySQL/PostgreSQL do things differently, so you will likely have to do this all outside of the SQL level. Well the thing is, even though it's much easier, I don't want to make AD a requirement to use the POS app and make users go through the trouble of buying/installing an AD server, but ultimately I want to support MySQL, Postgres, hell even SQLite (I am young and naive). Also, in the environment this is to be used in, user switching would be too slow; most likely there would just be several instances of the app running under a single AD/regular user on a few computers where users share a PC.
|
# ? Oct 22, 2009 20:23 |
|
I'm sure a billion people are about to correct me, but I think the usual way to do this is to have user and role tables, rather than using the built in DB users and roles, especially if the app is web-based and/or you want to be as DB agnostic as possible. This way, all connection to the app is done using an administrative-type service account with the proper permissions. So, when someone connects, they log in to the app using their personal account information, but the connection between the app and the db uses the service account. Also, don't store plain-text passwords.
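A minimal sketch of those app-level tables (names and columns are illustrative, not from the thread):

```sql
CREATE TABLE app_roles (
    role_id  INT          NOT NULL PRIMARY KEY,
    name     VARCHAR(50)  NOT NULL UNIQUE      -- 'admin', 'management', ...
);

CREATE TABLE app_users (
    user_id  INT          NOT NULL PRIMARY KEY,
    login    VARCHAR(50)  NOT NULL UNIQUE,
    pw_hash  CHAR(64)     NOT NULL,            -- salted hash, never plain text
    role_id  INT          NOT NULL REFERENCES app_roles (role_id)
);
```

The app then connects with the one service account and checks role_id itself before enabling features.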
|
# ? Oct 22, 2009 21:02 |
|
Jethro posted:I'm sure a billion people are about to correct me, but I think the usual way to do this is to have user and role tables, rather than using the built in DB users and roles, especially if the app is web-based and/or you want to be as DB agnostic as possible. This way, all connection to the app is done using an administrative-type service account with the proper permissions. So, when someone connects, they log in to the app using their personal account information, but the connection between the app and the db uses the service account. Yes, this is pretty much what I was saying. You will have to set up your own roles and permissions for everything you want in your program, totally decoupled from the SQL user side of things.
|
# ? Oct 22, 2009 21:38 |
|
Jethro posted:I'm sure a billion people are about to correct me, but I think the usual way to do this is to have user and role tables, rather than using the built in DB users and roles, especially if the app is web-based and/or you want to be as DB agnostic as possible. This way, all connection to the app is done using an administrative-type service account with the proper permissions. So basically, you mean disabling/enabling the parts of my Windows Forms that a given user shouldn't have access to when he loads a form? Or altering my data layer classes to use some sort of permissions system? I do have a "CurrentUser" static var, so this would be feasible.
|
# ? Oct 23, 2009 07:36 |
|
wigga please posted:So basically you mean disabling/enabling parts of my Windows Forms that a given user shouldn't have access to when he loads a form? Or altering my data layer classes to use some sort of permissions system? I do have a "CurrentUser" static var so this would be feasible. Yes, pretty much. What it looked like you were trying to do was have each user login map to a SQL user with a set of permissions controlling what they could do in the application. If you instead tie permissions to their login info, so it's independent of your database, it can do what you want.
|
# ? Oct 23, 2009 08:05 |
|
Sprawl posted:Yes pretty much what it looked like what you were trying to do was each user login have a sql user with a set of permissions that controlled what they could do in the application if you tied that in with their login info so its independent of your database it can do what you want. So what if I implement half of this and make 10 logins (9 + service login to fetch user), and make separate Connect methods in the application? I now have: code:
code:
|
# ? Oct 23, 2009 09:10 |
|
|
|
I have a stored procedure that I want to run about every 5 minutes on a SQL 2005 enterprise server. I thought I could just create a schedule that would run it at that frequency, but that doesn't seem to exist - the lowest resolution is daily. So I thought I'd set it to run whenever the CPUs were idle, but I don't know how often that is, and it's important that this runs with a consistent frequency. Am I attacking this the wrong way? How should I set this up so that it runs every 5 minutes? EDIT: HOLY poo poo I'M BLIND. Forget you read this.
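For anyone else who is similarly blind: the sub-daily frequency hides in the schedule's "Occurs every" settings in SQL Server Agent, or via T-SQL (job name is hypothetical; parameter values are the documented msdb codes):

```sql
EXEC msdb.dbo.sp_add_jobschedule
    @job_name             = N'RunMyProc',   -- hypothetical job name
    @name                 = N'Every5Min',
    @freq_type            = 4,              -- daily
    @freq_interval        = 1,
    @freq_subday_type     = 4,              -- unit: minutes
    @freq_subday_interval = 5;              -- every 5 of them
```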
|
# ? Oct 23, 2009 15:07 |