|
Scrapez posted:Efficiency question... If you want to update other data associated with a duplicated phone number, there is also INSERT ... ON DUPLICATE KEY UPDATE, which is more complicated but lets you specify whether and how the existing data is updated when a collision is found.
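A minimal sketch of that upsert pattern, assuming a hypothetical contacts table keyed on the phone number (table and column names are made up for illustration):

```sql
-- MySQL: requires a PRIMARY KEY or UNIQUE index on the colliding column
CREATE TABLE contacts (
    phone     VARCHAR(20) PRIMARY KEY,
    name      VARCHAR(100),
    last_seen DATETIME
);

-- On a key collision, keep the row but refresh the mutable columns
INSERT INTO contacts (phone, name, last_seen)
VALUES ('555-0100', 'Alice', NOW())
ON DUPLICATE KEY UPDATE
    name      = VALUES(name),
    last_seen = VALUES(last_seen);
```

VALUES(col) inside the UPDATE clause refers to the value that would have been inserted for that column, so the same statement works for both the fresh-insert and the collision case.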
|
# ? Aug 24, 2017 09:25 |
|
|
PhantomOfTheCopier posted:That is not a job for grep. This will be helpful for future reference. I actually tried it out with cat oldlist newlist | sort | uniq -d and it works great. The -d flag on uniq returns only the duplicated lines, i.e. the numbers that appear in both lists, so it immediately shows which new numbers were already in the oldlist. Previously, I'd used grep -vf.
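For reference, a small self-contained run of that pipeline (file contents made up; this assumes each list is internally duplicate-free, so uniq -d on the combined sorted stream is exactly the intersection):

```shell
printf '1111\n2222\n3333\n' > oldlist
printf '2222\n4444\n' > newlist

# numbers that appear in both lists
cat oldlist newlist | sort | uniq -d
# -> 2222

# numbers that appear in only one of the two lists
cat oldlist newlist | sort | uniq -u
# -> 1111, 3333, 4444

# numbers in newlist that are NOT in oldlist (the old grep -vf job)
sort oldlist > old.sorted
sort newlist > new.sorted
comm -13 old.sorted new.sorted
# -> 4444
```

comm -13 suppresses column 1 (lines only in the first file) and column 3 (lines in both), leaving just the lines unique to the second file.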
|
# ? Aug 28, 2017 22:32 |
|
Scrapez posted:This will be helpful for future reference. I actually tried it out with cat oldlist newlist | sort | uniq -d and it works great. sort(1) can also merge already-sorted files (sort -m), so keep the master list sorted, sort each batch of new numbers independently first, then merge. You're basically doing what a database would do, without the transactions, rollback, or index overhead. Make sure you have backups. You could probably use diff/patch to permit point-in-time rollback/rollforward.
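A sketch of that merge workflow with made-up file names. sort -m merges inputs that are already sorted instead of re-sorting everything, and -u drops duplicates during the merge:

```shell
# master is kept sorted at all times; only the small new batch gets a real sort
printf '1111\n3333\n' > master
printf '3333\n2222\n' > incoming

sort incoming > incoming.sorted                 # sort just the new batch
sort -m -u master incoming.sorted > master.new  # merge pre-sorted inputs, dedupe
mv master.new master

cat master
# -> 1111, 2222, 3333
```

Since the merge is linear in the total line count, this stays cheap even when the master file is large.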
|
# ? Aug 29, 2017 01:15 |
|
Is this a good place to ask questions about ER diagrams and stuff?
|
# ? Aug 30, 2017 06:58 |
|
underage at the vape shop posted:Is this a good place to ask questions about ER diagrams and stuff? Probably
|
# ? Aug 30, 2017 16:54 |
|
Nth Doctor posted:Probably What about OLTP vs OLAP?
|
# ? Sep 4, 2017 14:42 |
|
In the past I've worked with a server that hosts a MySQL database containing several tens of GB of time-series data. It's basically timestamped measurements of temperatures/pressures/voltages/etc. I followed the basic recommendations in the MySQL documentation for backup: I created a mysqldump and enabled the binary log. The server had a catastrophic power outage and didn't recover very well. The current maintainers have moved on to a new server, but I need the historical data on my personal computer to do some analysis. The other problem is that I can't really fit the data onto my actual PC: I have a MacBook Pro with a small HD, so there's not enough space. So I created a tablespace whose file is located on an external drive, which seems to work out well. When I read the data in, I use sed to make sure the tables get put into my tablespace on the external drive (sed 's/ENGINE=InnoDB/TABLESPACE mytablespace ENGINE=InnoDB/g'). This seems to work fine, but at some point in the binary log a new user is created and that corrupts my user table. Any clue why this might happen? I've installed MySQL v5.7.19 on my laptop, and the old server runs v5.5.57. Here are the only error messages that pop up in my log: code:
And here's the command adding the user. I tried reading in the log files once before, and MySQL generated an error on this line in this log file (#61 out of 400) and subsequently wouldn't let me log in. Unfortunately I've lost that error code so I can't tell you what it is, but I do know that this line caused a problem; I think previously it had complained about a password_last_changed column missing or something like that. code:
|
# ? Sep 7, 2017 15:52 |
|
Are you running an older version of MySQL on your local machine than was running on the former server? Try upgrading to the same version. The error implies you're missing columns in the mysql system database. https://dba.stackexchange.com/questions/16397/cannot-grant-privleges-as-root/16441#16441
|
# ? Sep 7, 2017 17:22 |
|
ElehemEare posted:Are you running an older version of MySQL on your local than was running on the former server? My installation is 5.7; the old installation that produced the binary logs is 5.5. So I'm inputting a mysqldump from 5.5 into a 5.7 server (this works fine), and then using mysqlbinlog to generate SQL statements. Those numbers make some sense: according to that answer, 5.5's user table had 42 columns, which matches my error message. As far as I can tell the files I'm feeding it shouldn't directly affect mysql.user (unless I'm missing something obvious). Can adding a new user or granting it privileges drop columns from that table or mess it up somehow? Edit: Sorry, I predictably didn't look far enough into the logs. Apparently at some point the DB was upgraded (from some earlier version). A user called 'debian-sys-maint' was added and then a whole lot of stuff was done to the user table among other ones. Maybe that's what caused the problems; it's doing a bunch of stuff to migrate to the then-current version (5.5) that really messes up my 5.7 installation. Edit2: Ok, maybe my real problem was that my initial mysqldump re-created the user table... When the subsequent create user command was run it must have corrupted those tables. I must have initially dumped ALL the databases on my installation instead of just the data ones I was interested in. Eeyo fucked around with this message at 16:41 on Sep 8, 2017 |
# ? Sep 7, 2017 19:47 |
|
I'm debugging an update trigger problem on SQL Server 2014 and it's driving me nuts. Why aren't the last two columns giving the same results inside an update trigger: code:
NULL, 3105CA1, 3105, 3105CA1

Since this is inside an update trigger, I am dumping the values into a table. The second column, using the case switch, gives me what I actually need (3105CA1), but I don't understand why the isnull() isn't returning the same result. I do a set @elite_c2 = (select right(<some string>, 4) from <some table> where <foo>) earlier in the code. But in this case, that table returns nothing, so @elite_c2 should be NULL, which is confirmed when I select it by itself and dump its result. Secx fucked around with this message at 21:26 on Sep 21, 2017 |
# ? Sep 21, 2017 21:15 |
|
This might be more suited to an AWS-based thread, but it largely concerns SQL so I figured I would throw it in here. Does anyone have significant experience with the AWS RDS products? I have a master/slave configuration currently running on dedicated hardware using MariaDB 10 and parallel replication that I am having issues with. The machines are fairly decent: SSD, 24 cores each, 96GB of memory, etc. The slave keeps up without any significant issue.

I have attempted to slave from that environment into a new RDS instance, which no matter what I throw at it simply cannot keep up. I am slaving directly off the master and it still can't keep up. It's not an I/O issue; the logs are making it to the machine fine, it's purely a SQL-apply issue. The binary logs being presented are the same, so the opportunity for parallel apply is the same as for the slave on the physical infrastructure. The major tuning components between the two are the same, and we are paying for Provisioned IOPS (which we are not even close to hitting), yet still nothing.

For once I am truly stumped: try as I might, no matter how large an instance type or which replication settings we play with, it continues to be unable to keep up despite an essentially identical configuration to our current setup. I am pretty close to throwing up an EC2 instance with MariaDB for testing to rule out an RDS-specific issue.
|
# ? Sep 25, 2017 00:31 |
|
Secx posted:I'm debugging an update trigger problem on SQL Server 2014 and it's driving me nuts. Look at the data types of id 1 & 2. coalesce is ANSI standard and isnull can jump off a drat cliff: isnull(var1, var2) will implicitly cast var2 to var1's datatype.
|
# ? Sep 25, 2017 05:19 |
|
meanieface posted:Look at the data types of id 1 & 2. Both variables are nvarchars. They differ by length:

declare @old_matter_id nvarchar(36)
declare @elite_c2 nvarchar(4)

At this point, I got my code working by using "case when" and consider it good enough, but I'm just curious why isnull() behaves that way. If I didn't already have a billion other things to do for this project I'd try to build a minimal standalone database with a reproducible test case and get an SQL Server SME to look at it, but
|
# ? Sep 26, 2017 19:59 |
|
Secx posted:Both variables are nvarchars. They differ by length: Right. So doing isnull(@elite_c2,@old_matter_id) implicitly casts @old_matter_id to nvarchar(4) when @elite_c2 is null, resulting in the answer of 3105.
|
# ? Sep 26, 2017 20:22 |
|
Secx posted:Both variables are nvarchars. They differ by length: I didn't explain it super well: implicit casts use all of the data type's definition, and in T-SQL an nvarchar's, varchar's, or char's definition includes how many characters are allowed in the variable. You don't have to build a standalone thing to test. New query window:

DECLARE @var_short NVARCHAR(4), @var_long NVARCHAR(10) = 'asdfghjklm'
SELECT COALESCE(@var_short, @var_long) AS CoalesceIsAnsiiStandard,
       ISNULL(@var_short, @var_long) AS TSQLSpecific;

--- Love, That cranky SME you're talking about.
|
# ? Sep 27, 2017 14:40 |
|
Any idea what would cause MS SQL error 19 ("physical connection is not usable") when the app and the database are on the same machine (my work laptop, in this instance)? It seems to be random, and I can't reproduce it in testing on my (much beefier) desktop. Obviously it's not a network error. Ideas?
|
# ? Sep 30, 2017 05:45 |
|
Is your app recreating the database handler multiple times?
|
# ? Sep 30, 2017 07:24 |
|
Ranzear posted:Is your app recreating the database handler multiple times? I use a series of static C# classes, one per table to handle CRUD at the SqlCommand / SqlConnection level, and each class has its own SqlConnection, if that helps. Generally the connection is opened when the app starts and is closed before it exits. I'm using Integrated Security = true, pooling = false, MARS = true and timeout = 10, if any of that matters.
|
# ? Sep 30, 2017 14:36 |
|
LongSack posted:I use a series of static C# classes, one per table to handle CRUD at the SqlCommand / SqlConnection level, and each class has its own SqlConnection, if that helps. Generally the connection is opened when the app starts and is closed before it exits. I'm using Integrated Security = true, pooling = false, MARS = true and timeout = 10, if any of that matters. Don't do that. Best practice with SqlConnections is to create them and dispose them as soon as you're done (using (var conn = new SqlConnection(...)) { ... }). ADO.NET handles connection pooling for you under the hood, so opening and closing frequently is cheap.
|
# ? Sep 30, 2017 14:40 |
|
Haven't used SQL in years. Tables `users`, `accounts`, and `addresses`. Users can make many accounts and every account may or may not have a single address. The schema I have come up with is: code:
code:
|
# ? Sep 30, 2017 16:31 |
|
Nolgthorn posted:Haven't used SQL in years.
|
# ? Sep 30, 2017 18:08 |
|
Kuule hain nussivan posted:Yes, a left join will return all accounts regardless of whether they have an address or not. If an account has many addresses, you'll receive several rows for the account. I believe I can prevent the relationship from being greater than 1-1 by forcing `account_id` to be unique on the `addresses` table.
|
# ? Sep 30, 2017 18:11 |
|
Nolgthorn posted:I believe I can prevent the relationship from being greater than 1-1 by forcing `account_id` to be unique on the `addresses` table.
|
# ? Sep 30, 2017 18:20 |
|
For my application, addresses have a very specific and separate purpose. I expect a lot of reads on the addresses table where what I want is the account id from it. I also expect there won't be very many addresses attached to accounts, so this structure is better for me. Strangely, after not using relational databases for some time, I expected there to be a mechanism built in for 1-1 relationships. Is forcing account_id to be unique a hack?
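For reference, a UNIQUE constraint on the foreign key column is the conventional way to express an optional 1-1 in SQL; there is no dedicated 1-1 mechanism. A sketch with guessed types and columns:

```sql
-- Hypothetical types; the point is the UNIQUE on the FK column.
CREATE TABLE accounts (
    id      CHAR(36) PRIMARY KEY,                 -- uuid
    user_id CHAR(36) NOT NULL REFERENCES users(id)
);

CREATE TABLE addresses (
    id         INTEGER PRIMARY KEY,               -- auto-increment in practice
    account_id CHAR(36) NOT NULL UNIQUE REFERENCES accounts(id),
    line1      VARCHAR(200)
);
```

The UNIQUE caps it at one address per account, and an account simply having no matching row covers the "may not have one" side.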
|
# ? Sep 30, 2017 18:47 |
|
Nolgthorn posted:For my application addresses have a very specific and separate purpose.
|
# ? Sep 30, 2017 18:58 |
|
The id column on addresses auto-increments, for reasons, and the id column on accounts is a uuid. I think I'm satisfied with this solution for now. Thanks for the help!
|
# ? Sep 30, 2017 19:03 |
|
Nolgthorn posted:The id column on addresses auto increments, for reasons, and the id column on accounts is a uuid I think I'm satisfied with this solution for now. Thanks for the help!
|
# ? Sep 30, 2017 19:07 |
|
NihilCredo posted:Don't do that. Best practice with SqlConnections is to create them and immediately dispose them when you're done (using sqlConn = new SqlConnection()). The database server will handle the pooling for performance. Got it. Thanks!
|
# ? Sep 30, 2017 21:08 |
|
PHP here. I just pass my single database handler per request as the first arg to all my class-static functions.
|
# ? Oct 1, 2017 00:14 |
|
I have a table with item numbers and multiple vendors associated with them by vendor ID. If I query a specific item#, it shows below:

item#  vendor
001    vend1
001    vend2
001    vend3

I was able to get it to the way I need:

item#  vendor1  vendor2  vendor3  vendor4
001    vend1    vend2    vend3    null

Here's what I have right now: code:
If I don't specify an item number, the query results in showing me all items but all the vendors are null. Will I need to process this in a loop for each item number into a new table? Or is there a way to query it?
|
# ? Oct 1, 2017 16:46 |
|
You want the ROW_NUMBER() OVER (PARTITION BY x ...) window function. That will give you each vendor's ordinal number within that particular item. Then you can select your columns from the results, and aggregate by picking only the right entries. code:
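A sketch of that approach with hypothetical table and column names: number each vendor within its item, then pivot with conditional aggregation. Because there's no per-item filter, it produces one row per item in one pass:

```sql
WITH numbered AS (
    SELECT item_no,
           vendor_id,
           ROW_NUMBER() OVER (PARTITION BY item_no ORDER BY vendor_id) AS rn
    FROM item_vendors
)
SELECT item_no,
       MAX(CASE WHEN rn = 1 THEN vendor_id END) AS vendor1,
       MAX(CASE WHEN rn = 2 THEN vendor_id END) AS vendor2,
       MAX(CASE WHEN rn = 3 THEN vendor_id END) AS vendor3,
       MAX(CASE WHEN rn = 4 THEN vendor_id END) AS vendor4
FROM numbered
GROUP BY item_no;
```

The CASE expressions have no ELSE, so they yield NULL for the other row numbers, and MAX ignores those NULLs; items with fewer than four vendors get null in the trailing columns.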
NihilCredo fucked around with this message at 17:49 on Oct 1, 2017 |
# ? Oct 1, 2017 17:46 |
|
NihilCredo posted:You want the ROW_NUMBER(PARTITION BY x) function. That will give you each vendor's number for that particular item. I changed the line code:
code:
|
# ? Oct 1, 2017 18:05 |
|
Not sure whether to post this here or in the .NET thread, but I'll start here and see where it takes me. I have a program I use to teach myself stuff. It is a fairly complex piece of code involving many aspects of C# and the .NET framework, and as I learn more I rewrite the program from scratch. It'll never be a commercial product, and will probably never be finished at all. I'm currently on my 4th rewrite, this time moving from an ADO database access layer (at the SqlConnection / SqlCommand level) to an ORM (nHydrate over EF).

Basically, it is a "portfolio manager" for authors to keep track of their characters; my brother is a published author, so when I decided to learn C# this seemed like a good idea for a program to learn with. One of the things you can do is link characters to each other based on a (user-defined) relationship. I want to give the program the ability to optionally create a reciprocal relationship. There is a relationship table (aunt, father, grandson, etc.) and a gender table (my brother writes science fiction, so I won't assume a binary gender). The reciprocal table contains a relationship id, a gender id, and a reciprocal id. The relationship and reciprocal ids are both foreign keys into the relationship table, so a row might look like this: <Aunt>,<Female>,<Niece>. Thus, if Mary is Sally's aunt, and Sally is female, then Sally is Mary's niece. This worked great when I was using ADO, but with nHydrate and EF model-first, the reciprocal table keeps getting generated with the id fields set to unique even when I tell the ORM not to make them unique. This doesn't work, because now I can't add a reciprocal for aunt and male. So, after all that, my question: is there a better way to design the tables to support what I want to do?

Related question: while the program keeps track of a lot of data about the characters, I can't think of everything, so the user can create custom properties. There is a custom property table that just contains an id and a description. The nHydrate way to handle many-to-many relationships is to create an association table, but it doesn't let me add fields to that table. In the ADO world, the association table has the character id, the custom property id, and the value of the custom property. I feel like I may be running afoul of some sort of normalization rule, but I am not really educated on "good" database design. I know how to do what I need to do with SQL statements, but something tells me that if I am struggling this hard trying to do something with my ORM and EF, perhaps the problem is me
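For what it's worth, the relational way to say "unique together, but not individually" is a composite key on the pair of columns. A sketch with guessed names and types, based on the tables described above:

```sql
CREATE TABLE relationship (
    id   INT PRIMARY KEY,
    name VARCHAR(50) NOT NULL           -- 'Aunt', 'Niece', ...
);

CREATE TABLE gender (
    id   INT PRIMARY KEY,
    name VARCHAR(50) NOT NULL
);

-- Neither column is unique on its own; only the PAIR is,
-- so (Aunt, Female) and (Aunt, Male) can both exist.
CREATE TABLE reciprocal (
    relationship_id INT NOT NULL REFERENCES relationship(id),
    gender_id       INT NOT NULL REFERENCES gender(id),
    reciprocal_id   INT NOT NULL REFERENCES relationship(id),
    PRIMARY KEY (relationship_id, gender_id)
);
```

If the ORM keeps forcing the individual columns to be unique, the composite primary key is the constraint you actually want it to emit.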
|
# ? Oct 13, 2017 01:58 |
|
It feels like the gender and the reciprocal relationship are separate things. Also the gender is just one attribute of an unlimited number of possible attributes which could be assigned to a character.

character
----
id: uuid

character_attr
----
id: uuid
character_id: uuid
name: string (eg. "Gender")
value: string (eg. "Female")

character_rel
----
id: uuid
character1_id: uuid
character2_id: uuid
value: string (eg. "Aunt")

That way you can have as many relationships between characters of any type as you want, as well as as many character attributes as you want. This is starting to get opinionated, but that's how I'd do it. If there were a common set of attributes or relationships, I'd keep them in a separate table so they could be "selected" easily in the UI.

common_character_attrs
----
id: uuid
name: string
value: string

common_character_rels
----
id: uuid
value: string

Or I'd do a fancy select statement to grab the most common character_attrs and character_rels without adding the extra tables. But now I'm getting off track and I don't know if I am answering your question properly. Was I way off?
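A rough DDL translation of the sketch above, with guessed types (the character table is pluralised here to dodge the reserved word CHARACTER, and the UNIQUE on character_attr is an added assumption that each character has at most one value per attribute name):

```sql
CREATE TABLE characters (
    id CHAR(36) PRIMARY KEY                        -- uuid
);

CREATE TABLE character_attr (
    id           CHAR(36) PRIMARY KEY,
    character_id CHAR(36) NOT NULL REFERENCES characters(id),
    name         VARCHAR(50)  NOT NULL,            -- e.g. 'Gender'
    value        VARCHAR(100) NOT NULL,            -- e.g. 'Female'
    UNIQUE (character_id, name)                    -- assumption: one value per attribute name
);

CREATE TABLE character_rel (
    id            CHAR(36) PRIMARY KEY,
    character1_id CHAR(36) NOT NULL REFERENCES characters(id),
    character2_id CHAR(36) NOT NULL REFERENCES characters(id),
    value         VARCHAR(50) NOT NULL             -- e.g. 'Aunt'
);
```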
|
# ? Oct 15, 2017 18:29 |
|
I guess this thread's broken, too.
|
# ? Oct 15, 2017 18:31 |
|
Another ghost post
|
# ? Oct 15, 2017 18:32 |
|
ghost post
|
# ? Oct 15, 2017 18:32 |
|
|
It feels like the gender and the reciprocal relationship are separate things. The gender is just one attribute of an unlimited number of possible attributes which could be assigned to a character. So I'd do it something more like this. code:
code:
|
# ? Oct 15, 2017 18:33 |