Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Scrapez posted:

Efficiency question...

I'm writing a shell script that will take a list of telephone numbers each day and determine whether each number already exists in the master list. If a number does not exist, it will be added; if it does, it will be discarded.

Currently, I'm accomplishing this with flat files using grep. What I want to do is move the master list of numbers to a MySQL database.

My question is whether it would be better to continue using the flat files to get the list of new unique numbers and then insert those numbers into the MySQL database, or to run a MySQL query against the master list to determine whether the new numbers exist and insert them if not.

It seems like MySQL would have some type of function that would make it possible to deliver a list of numbers and have it insert them if they don't already exist. Does such a thing exist? I am obviously not well versed with MySQL.

Edit: I was finally able to google the right thing. It appears I can do an INSERT IGNORE which will skip any entries that already exist in the database.

If you want to update other data associated with a duplicated phone number then there is also INSERT ... ON DUPLICATE KEY UPDATE which is more complicated but allows you to specify whether and how the existing data is to be updated when a collision is found.
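For illustration, a minimal sketch of both statements (the table and column names here are made up; either statement needs a UNIQUE or PRIMARY KEY index on the phone number to detect the collision):

code:
-- Hypothetical master table; the PRIMARY KEY on phone_number is what lets
-- MySQL spot duplicates for both statements below.
CREATE TABLE phone_numbers (
    phone_number VARCHAR(20) NOT NULL PRIMARY KEY,
    first_seen   DATE NOT NULL,
    last_seen    DATE NOT NULL
);

-- INSERT IGNORE: silently skip rows whose phone_number already exists.
INSERT IGNORE INTO phone_numbers (phone_number, first_seen, last_seen)
VALUES ('5551234567', CURDATE(), CURDATE());

-- ON DUPLICATE KEY UPDATE: keep the existing row but refresh last_seen.
INSERT INTO phone_numbers (phone_number, first_seen, last_seen)
VALUES ('5551234567', CURDATE(), CURDATE())
ON DUPLICATE KEY UPDATE last_seen = VALUES(last_seen);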

Scrapez
Feb 27, 2004

PhantomOfTheCopier posted:

That is not a job for grep.

cat oldlist newlist | sort | uniq

This will be helpful for future reference. I actually tried it out with cat oldlist newlist | sort | uniq -d and it works great. The -d flag on uniq just returns the duplicates which provides numbers that weren't already in the oldlist.

Previously, I'd used grep -fv.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!

Scrapez posted:

This will be helpful for future reference. I actually tried it out with cat oldlist newlist | sort | uniq -d and it works great. The -d flag on uniq just returns the duplicates which provides numbers that weren't already in the oldlist.

Previously, I'd used grep -fv.
If you're trying to find the new unique numbers, indeed, and I expect grep may be faster depending on... things. If you just want to maintain the full list, the output of the original command is the new phone directory. No clean way to do it in place, you need to output to a temp file and mv. (Don't pipe out to your input file, bad.)

Linux sort has a merge mode (sort -m), so keep the master sorted, sort the new numbers independently first, then merge the two. You're basically doing what a database would do, without transactions, rollback, or index overhead. Make sure you have backups. You could probably use diff/patch to permit point-in-time rollback/rollforward. :allears:

underage at the vape shop
May 11, 2011

by Cyrano4747
Is this a good place to ask questions about ER diagrams and stuff?

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


underage at the vape shop posted:

Is this a good place to ask questions about ER diagrams and stuff?

Probably

KernelSlanders
May 27, 2013

Rogue operating systems on occasion spread lies and rumors about me.

What about OLTP vs OLAP?

Eeyo
Aug 29, 2004

In the past I've worked with a server that hosts a MySQL database containing some tens of GB of time series data. It's basically timestamped measurements of temperatures/pressures/voltages/etc. I followed the basic recommendations in the MySQL documentation for backup: I created a mysqldump and enabled the binary log. The server had a catastrophic power outage and didn't recover very well. The current maintainers have moved on to a new server, but I need the historical data on my personal computer to do some analysis.

The other problem is I can't really fit the data onto my actual PC: I have a macbook pro with a small HD so there's not enough space. So I created a tablespace where the file is located on an external drive, which seems to work out well.

When I read the data in, I use sed to make sure the tables get put into my tablespace on the external drive (sed 's/ENGINE=InnoDB/TABLESPACE mytablespace ENGINE=InnoDB/g').

This seems to work fine, but at some point in the binary log a new user is created and that corrupts my user table. Any clue why this might happen? I've installed MySQL v5.7.19 on my laptop, and the old server runs v5.5.57.

Here's the only error messages that pop up in my log:
code:
/usr/local/mysql/bin/mysqld: Column count of mysql.user is wrong.
Expected 45, found 42. The table is probably corrupted

[Warning] 'db' entry '[database_name] [username]@localhost' had
database in mixed case that has been forced to lowercase because
lower_case_table_names is set. It will not be possible to remove this
privilege using REVOKE.
(The second one sounds odd to me. I'm using a different database name than what was on the original server, could that gently caress it up?)

And here's the command adding the user. I tried reading in the log files once before, and MySQL generated an error on this line in this log file (#61 out of 400) and subsequently wouldn't let me log in. Unfortunately I've lost that error code so I can't tell you what it is, but I do know that this line caused a problem; I think previously it had complained about a password_last_changed column missing or something like that.
code:
CREATE USER '[username2]'@'localhost' IDENTIFIED BY '[password]'
Thanks for reading!

ElehemEare
May 20, 2001
I am an omnipotent penguin.

Are you running an older version of MySQL on your local than was running on the former server?

Try upgrading to the same version. The error implies you're missing columns in the mysql system database.

https://dba.stackexchange.com/questions/16397/cannot-grant-privleges-as-root/16441#16441

Eeyo
Aug 29, 2004

ElehemEare posted:

Are you running an older version of MySQL on your local than was running on the former server?

Try upgrading to the same version. The error implies you're missing columns in the mysql system database.

https://dba.stackexchange.com/questions/16397/cannot-grant-privleges-as-root/16441#16441

My installation is 5.7; the old installation that produced the binary logs is 5.5. So I'm inputting a mysql dump from 5.5 into a 5.7 server (this works fine), and then using mysqlbinlog to generate SQL statements. Those numbers make some sense: according to that link, 5.5 had 42 columns, which matches my error message.

As far as I can tell the files I'm feeding it shouldn't directly affect mysql.user (unless I'm missing something obvious). Can adding a new user or granting it privileges drop columns from that table or mess it up somehow?

Edit: Sorry, I predictably didn't look far enough into the logs. Apparently at some point the DB was upgraded (from some earlier version). A user called 'debian-sys-maint' was added and then a whole lot of stuff was done to the user table among other ones. Maybe that's what caused the problems; it's doing a bunch of stuff to migrate to the then-current version (5.5) that really messes up my 5.7 installation.

Edit2: Ok, maybe my real problem was that my initial mysql dump re-created the user table... When the subsequent create user command was run it must have corrupted those tables. I must have initially dumped ALL the databases on my installation instead of just the data ones I was interested in.

Eeyo fucked around with this message at 16:41 on Sep 8, 2017
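As a quick sanity check (a sketch; assumes a stock 5.7 layout), the restored mysql.user table's column count can be compared against the 45 the error message says 5.7 expects:

code:
SELECT COUNT(*) AS user_columns
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mysql'
  AND TABLE_NAME = 'user';
-- 45 is what this 5.7 install expects; 42 means the 5.5-era table from the
-- dump is still in place, which is exactly the mismatch the error reports.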

Secx
Mar 1, 2003


Hippopotamus retardus
I'm debugging an update trigger problem on SQL Server 2014 and it's driving me nuts.

Why aren't the last two columns giving the same results when inside an update trigger:

code:
select @elite_c2, @old_matter_id, isnull(@elite_c2, @old_matter_id), case when @elite_c2 is null then @old_matter_id else @elite_c2 end from inserted i
The SQL result for the query above is as follows:

NULL, 3105CA1, 3105, 3105CA1

Since this is inside an update trigger, I am dumping the values in a table. The second column using the case switch gives me what I actually need (3105CA1), but I don't understand why the isnull() isn't returning the same result.

I do a set @elite_c2 = (select right(<some string>, 4) from <some table> where <foo>) earlier in the code. But in this case, that table returns nothing, so @elite_c2 should be NULL, which is confirmed when I select it by itself and dump its result.

Secx fucked around with this message at 21:26 on Sep 21, 2017

insidius
Jul 21, 2009

What a guy!
This might be more suited to an AWS-based thread, but it largely concerns SQL so I figured I would throw it in here.

Does anyone have any significant experience with the AWS RDS products? I have a master/slave configuration currently running on dedicated hardware using MariaDB 10 and parallel replication that I am having issues with.

The machines are fairly decent: SSD, 24 cores each, 96GB of memory, etc.

The slave keeps up without any significant issue.

I have attempted to slave from the environment into a new RDS instance, which no matter what I throw at it simply cannot keep up. I am slaving directly off the master and it still can't keep up. It's not an IO issue, the logs are making it to the machine fine; it's purely an SQL issue. The binary logs being presented are the same, so the opportunity for parallel apply is the same as it is for the slave on the physical infrastructure.

The major tuning components between the two are the same, and we are paying for provisioned IOPS (which we are not even close to hitting), yet still nothing. I for once am just truly stumped: try as I might, no matter how large an instance type or which replication settings we play with, it continues to be unable to keep up despite being an essentially identical configuration to our current setup.

I am pretty close to spinning up an EC2 instance with MariaDB for testing, to rule out an RDS-specific issue.
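One thing worth comparing before going the EC2 route (a diagnostic sketch, not a known fix): the replication-related settings the RDS instance actually ended up with, since on RDS they come from the DB parameter group rather than my.cnf:

code:
-- Run on both the physical slave and the RDS replica and compare.
-- If the RDS parameter group leaves slave_parallel_threads at 0, the replica
-- applies the binlog single-threaded no matter how big the instance is.
SHOW GLOBAL VARIABLES LIKE 'slave_parallel%';

-- Seconds_Behind_Master, Relay_Log_Space, etc.
SHOW SLAVE STATUS;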

meanieface
Mar 27, 2012

During times of universal deceit, telling the truth becomes a revolutionary act.

Secx posted:

I'm debugging an update trigger problem on SQL Server 2014 and it's driving me nuts.

Why aren't the last two columns giving the same results when inside an update trigger:

code:

select @elite_c2, @old_matter_id, isnull(@elite_c2, @old_matter_id), case when @elite_c2 is null then @old_matter_id else @elite_c2 end from inserted i

The SQL result for the query above is as follows:

NULL, 3105CA1, 3105, 3105CA1

Since this is inside an update trigger, I am dumping the values in a table. The second column using the case switch gives me what I actually need (3105CA1), but I don't understand why the isnull() isn't returning the same result.

I do a set @elite_c2 = (select right(<some string>, 4) from <some table> where <foo>) earlier in the code. But in this case, that table returns nothing, so @elite_c2 should be NULL, which is confirmed when I select it by itself and dump its result.

Look at the data types of id 1 & 2.

coalesce is ansi standard and isnull can jump off a drat cliff. Isnull(var1, var2) will implicitly cast var2 to var1's datatype.

Secx
Mar 1, 2003


Hippopotamus retardus

meanieface posted:

Look at the data types of id 1 & 2.

coalesce is ansi standard and isnull can jump off a drat cliff. Isnull(var1, var2) will implicitly cast var2 to var1's datatype.

Both variables are nvarchars. They differ by length:

declare @old_matter_id nvarchar(36)
declare @elite_c2 nvarchar(4)

At this point, I got my code working by using "case when" and consider it good enough, but I'm just curious why isnull() behaves that way.

If I didn't already have a billion other things to do for this project I'd try to build a minimal standalone database with a reproducible test case and get an SQL Server SME to look at it, but :effort:

kumba
Nov 8, 2003

I posted my food for USPOL Thanksgiving!

enjoy the ride

Lipstick Apathy

Secx posted:

Both variables are nvarchars. They differ by length:

declare @old_matter_id nvarchar(36)
declare @elite_c2 nvarchar(4)

At this point, I got my code working by using "case when" and consider it good enough, but I'm just curious why isnull() behaves that way.

If I didn't already have a billion other things to do for this project I'd try to build a minimal standalone database with a reproducible test case and get an SQL Server SME to look at it, but :effort:

Right. So doing isnull(@elite_c2,@old_matter_id) implicitly casts @old_matter_id to nvarchar(4) when @elite_c2 is null, resulting in the answer of 3105.
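A one-off demonstration of that, using the same two variables (a sketch, nothing from the actual trigger):

code:
-- ISNULL takes its return type from the FIRST argument (nvarchar(4)), so the
-- wider value gets truncated; COALESCE uses the widest compatible type.
DECLARE @elite_c2      nvarchar(4)  = NULL;
DECLARE @old_matter_id nvarchar(36) = N'3105CA1';

SELECT ISNULL(@elite_c2, @old_matter_id)   AS isnull_result,    -- 3105
       COALESCE(@elite_c2, @old_matter_id) AS coalesce_result;  -- 3105CA1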

meanieface
Mar 27, 2012

During times of universal deceit, telling the truth becomes a revolutionary act.

Secx posted:

Both variables are nvarchars. They differ by length:

declare @old_matter_id nvarchar(36)
declare @elite_c2 nvarchar(4)

At this point, I got my code working by using "case when" and consider it good enough, but I'm just curious why isnull() behaves that way.

If I didn't already have a billion other things to do for this project I'd try to build a minimal standalone database with a reproducible test case and get an SQL Server SME to look at it, but :effort:

I didn't explain it super well: implicit casts use all of the data type's definition. In T-SQL, an nvarchar, varchar, or char's definition includes how many characters are allowed in the variable.

You don't have to do a standalone thing to test.

New query window:

code:
DECLARE @var_short NVARCHAR(4),
        @var_long  NVARCHAR(10) = 'asdfghjklm'

SELECT COALESCE(@var_short, @var_long) AS CoalesceIsAnsiiStandard,
       ISNULL(@var_short, @var_long) AS TSQLSpecific;

---

Love,
That cranky SME you're talking about.

LongSack
Jan 17, 2003

Any idea what would cause MS SQL error 19, "physical connection is not usable", when the app and the database are on the same machine (my work laptop, in this instance)? It seems to be random, and I can't reproduce it in testing on my (much beefier) desktop. Obviously, it's not a network error. Ideas?

Ranzear
Jul 25, 2013

Is your app recreating the database handler multiple times?

LongSack
Jan 17, 2003

Ranzear posted:

Is your app recreating the database handler multiple times?

I use a series of static C# classes, one per table to handle CRUD at the SqlCommand / SqlConnection level, and each class has its own SqlConnection, if that helps. Generally the connection is opened when the app starts and is closed before it exits. I'm using Integrated Security = true, pooling = false, MARS = true and timeout = 10, if any of that matters.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

LongSack posted:

I use a series of static C# classes, one per table to handle CRUD at the SqlCommand / SqlConnection level, and each class has its own SqlConnection, if that helps. Generally the connection is opened when the app starts and is closed before it exits. I'm using Integrated Security = true, pooling = false, MARS = true and timeout = 10, if any of that matters.

Don't do that. Best practice with SqlConnections is to create them and immediately dispose them when you're done (using sqlConn = new SqlConnection()). ADO.NET's connection pooling will handle reuse for performance behind the scenes.

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
Haven't used SQL in years.

Tables `users`, `accounts`, and `addresses`. Users can make many accounts and every account may or may not have a single address. The schema I have come up with is.

code:
`accounts`
---
id
user_id
name
...etc

`addresses`
---
id
account_id
address
...etc
What I would like is that when I get accounts, the address comes with it. Would that be a left join?

code:
SELECT accounts.*, addresses.address
FROM accounts
LEFT JOIN addresses
ON addresses.account_id = accounts.id;

Kuule hain nussivan
Nov 27, 2008

Nolgthorn posted:

Haven't used SQL in years.

Tables `users`, `accounts`, and `addresses`. Users can make many accounts and every account may or may not have a single address. The schema I have come up with is.

code:
`accounts`
---
id
user_id
name
...etc

`addresses`
---
id
account_id
address
...etc
What I would like is that when I get accounts, the address comes with it. Would that be a left join?

code:
SELECT accounts.*, addresses.address
FROM accounts
LEFT JOIN addresses
ON addresses.account_id = accounts.id;
Yes, a left join will return all accounts regardless of whether they have an address or not. If an account has many addresses, you'll receive several rows for the account.

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense

Kuule hain nussivan posted:

Yes, a left join will return all accounts regardless of whether they have an address or not. If an account has many addresses, you'll receive several rows for the account.

I believe I can prevent the relationship from being greater than 1-1 by forcing `account_id` to be unique on the `addresses` table.
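A minimal sketch of that constraint (MySQL-flavoured syntax; column types are placeholders, assuming accounts.id is the uuid mentioned later):

code:
CREATE TABLE addresses (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    account_id CHAR(36) NOT NULL UNIQUE,   -- at most one address per account
    address    VARCHAR(255) NOT NULL,
    FOREIGN KEY (account_id) REFERENCES accounts (id)
);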

Kuule hain nussivan
Nov 27, 2008

Nolgthorn posted:

I believe I can prevent the relationship from being greater than 1-1 by forcing `account_id` to be unique on the `addresses` table.
Yeah, that'll work. Do you expect lots of accounts to have the same address? In that case it might be better to enforce uniqueness across the whole address and have a foreign key in the accounts table that references the address id.

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
For my application addresses have a very specific and separate purpose.

I expect a lot of reads on the addresses table where what I want is the account id from it. I also expect there to be not very many addresses attached to accounts so this structure is better for me.

Strangely, after not using relational databases for some time I expected there to be a mechanic built in for `1-1` relationships. Is forcing the account_id to be unique a hack?

Kuule hain nussivan
Nov 27, 2008

Nolgthorn posted:

For my application addresses have a very specific and separate purpose.

I expect a lot of reads on the addresses table where what I want is the account id from it. I also expect there to be not very many addresses attached to accounts so this structure is better for me.

Strangely, after not using relational databases for some time I expected there to be a mechanic built in for `1-1` relationships. Is forcing the account_id to be unique a hack?
No, I can't think of any reason off the top of my head. Honestly, you could just drop the id-column completely, make the account id your primary key and be done with it.

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
The id column on addresses auto increments, for reasons, and the id column on accounts is a uuid :) I think I'm satisfied with this solution for now. Thanks for the help!

Kuule hain nussivan
Nov 27, 2008

Nolgthorn posted:

The id column on addresses auto increments, for reasons, and the id column on accounts is a uuid :) I think I'm satisfied with this solution for now. Thanks for the help!
No worries. Your solution should work fine and any optimisations can be easily made if the need arises.

LongSack
Jan 17, 2003

NihilCredo posted:

Don't do that. Best practice with SqlConnections is to create them and immediately dispose them when you're done (using sqlConn = new SqlConnection()). ADO.NET's connection pooling will handle reuse for performance behind the scenes.

Got it. Thanks!

Ranzear
Jul 25, 2013

PHP here. I just pass my single database handler per request as the first arg to all my class-static functions.

Revalis Enai
Apr 21, 2003
Wait, what's my phone number again?
Fun Shoe
I have a table with item numbers and multiple vendors associated with them by vendor ID. If I query a specific item#, it shows as below:

item# vendor
001 vend1
001 vend2
001 vend3

I was able to get it to the way I need:

item# vendor1 vendor2 vendor3 vendor4
001 vend1 vend2 vend3 null

Here's what I have right now:

code:
SELECT ITM_NUM, Vendor1, Vendor2, Vendor3, Vendor4
FROM
(SELECT
     'Vendor' + CAST(ROW_NUMBER() OVER(ORDER BY Cost.F01 ASC) AS VARCHAR(20)) AS RowN
    ,Cost.F01 AS ITM_NUM
    ,Vendor.F334 AS SellVendor
    ,F26 AS VendorCode
FROM COST_TABLE AS Cost
FULL JOIN POS_TABLE AS POS
    ON Cost.F01 = POS.F01
FULL JOIN VENDOR_TABLE AS Vendor
    ON Cost.VID = Vendor.VID
WHERE (CURRENT = '0' OR CURRENT = '1')
    AND Cost.F01 = '5555'
GROUP BY Cost.F01, Vendor.F334, F26) AS Result1
PIVOT
(
    MAX(SellVendor)
    FOR RowN IN (Vendor1, Vendor2, Vendor3, Vendor4)
) AS pivotTable
But I need to be able to do this for all items.
If I don't specify an item number, the query shows me all items, but all the vendors are null.
Will I need to process this in a loop for each item number into a new table, or is there a way to do it in a single query?

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

You want ROW_NUMBER() OVER (PARTITION BY x). That will give you each vendor's number within that particular item.

Then you can select your columns from the results, and aggregate by picking only the right entries.

code:
with vendors as (
  select
    itemNr, 
    vendor,
    row_number() over (partition by itemNr order by vendor) as vendorNr
  from item_vendors
)
select 
  itemNr,
  max(case when vendorNr = 1 then vendor end) as vendor1,
  max(case when vendorNr = 2 then vendor end) as vendor2,
  max(case when vendorNr = 3 then vendor end) as vendor3,
  max(case when vendorNr = 4 then vendor end) as vendor4
from vendors
group by itemNr

NihilCredo fucked around with this message at 17:49 on Oct 1, 2017

Revalis Enai
Apr 21, 2003
Wait, what's my phone number again?
Fun Shoe

NihilCredo posted:

You want ROW_NUMBER() OVER (PARTITION BY x). That will give you each vendor's number within that particular item.

Then you can select your columns from the results, and aggregate by picking only the right entries.

code:
with vendors as (
  select
    itemNr, 
    vendor,
    row_number() over (partition by itemNr order by vendor) as vendorNr
  from item_vendors
)
select 
  itemNr,
  max(case when vendorNr = 1 then vendor end) as vendor1,
  max(case when vendorNr = 2 then vendor end) as vendor2,
  max(case when vendorNr = 3 then vendor end) as vendor3,
  max(case when vendorNr = 4 then vendor end) as vendor4
from vendors
group by itemNr

I changed the line
code:
'Vendor' + CAST(ROW_NUMBER() OVER(ORDER BY Cost.F01 ASC) AS VARCHAR(20)) AS RowN
to
code:
'Vendor' + CAST(ROW_NUMBER() OVER(PARTITION BY Cost.F01 ORDER BY Vendor.F334 ASC) As Varchar(20)) As Rown
And got exactly what I needed! I'm going to read up more on how the (partition by) works. Much appreciate the help!
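For reading up on it, a tiny self-contained example (SQL Server syntax, toy data) of how PARTITION BY changes the numbering compared with the original ORDER BY-only version:

code:
-- per_item_rn restarts at 1 for each item, which is what makes the pivot
-- columns line up; global_rn just numbers the whole result set.
SELECT item, vendor,
       ROW_NUMBER() OVER (ORDER BY vendor)                   AS global_rn,
       ROW_NUMBER() OVER (PARTITION BY item ORDER BY vendor) AS per_item_rn
FROM (VALUES ('001', 'vend1'),
             ('001', 'vend2'),
             ('001', 'vend3'),
             ('002', 'vend1')) AS v(item, vendor);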

LongSack
Jan 17, 2003

Not sure whether to post this here or in the .NET thread, but I'll start here and see where it takes me.

I have a program I use to teach myself stuff. It is a fairly complex piece of code involving many aspects of c# and the .net framework, and as I learn more I rewrite the program from scratch. It'll never be a commercial product, and probably never be finished at all. I'm currently on my 4th rewrite, this time moving from ADO database access layer (at the SqlConnection / SqlCommand level) to an ORM (nHydrate over EF). Basically, it is a "portfolio manager" for authors to keep track of their characters - my brother is a published author, so when I decided to learn c# this seemed like a good idea for a program to learn with.

One of the things you can do is link characters to each other based on a (user-defined) relationship. I want to give the program the ability to optionally create a reciprocal relationship. There is a relationship table (aunt, father, grandson, etc.) and a gender table (my brother writes science fiction, so I won't assume a binary gender).

The reciprocal table contains a relationship id, a gender id, and a reciprocal id. The relationship and reciprocal ids both are foreign keys into the relationship table, so that a row might look like this:

<Aunt>,<Female>,<Niece>

Thus, if Mary is Sally's aunt, and Sally is female, then Sally is Mary's niece.

This worked great when I was using ADO, but when using nHydrate and EF model first, the reciprocal table keeps getting generated with the id fields being set to unique even when I tell the ORM not to make them unique. This doesn't work, because now I can't add a reciprocal for aunt and male.

So, after all that, my question: is there a better way to design the tables to support what I want to do?

Related question: while the program keeps track of a lot of data about the characters, I can't think of everything, so the user can create custom properties. There is a custom property table that just contains an id and a description. The nHydrate way to handle many-to-many relationships is to create an association table, but it doesn't let me add fields to that table. In the ADO world, the association table has the character id, the custom property id, and the value of the custom property.

I feel like I may be running afoul of some sort of normalization rule, but I am not really educated on "good" database design. I know how to do what I need to do with sql statements, but something tells me that if I am struggling this hard trying to do something with my ORM and EF, perhaps the problem is me :ohdear:
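For what it's worth, a plain-DDL sketch of the two tables as described (table and column names are illustrative; this is roughly what the ORM would need to generate, with uniqueness on the pair of columns rather than on each id individually):

code:
-- Reciprocal lookup: (relationship, gender of the related character) -> reciprocal.
-- Neither column is unique on its own; only the combination is.
CREATE TABLE reciprocal (
    relationship_id INT NOT NULL,   -- e.g. Aunt
    gender_id       INT NOT NULL,   -- e.g. Female
    reciprocal_id   INT NOT NULL,   -- e.g. Niece
    CONSTRAINT pk_reciprocal PRIMARY KEY (relationship_id, gender_id),
    FOREIGN KEY (relationship_id) REFERENCES relationships (id),
    FOREIGN KEY (gender_id)       REFERENCES genders (id),
    FOREIGN KEY (reciprocal_id)   REFERENCES relationships (id)
);

-- Custom properties: an association table that also carries the value,
-- so it is not a pure many-to-many link table.
CREATE TABLE character_custom_properties (
    character_id       INT NOT NULL,
    custom_property_id INT NOT NULL,
    value              NVARCHAR(400) NOT NULL,
    CONSTRAINT pk_char_custom PRIMARY KEY (character_id, custom_property_id),
    FOREIGN KEY (character_id)       REFERENCES characters (id),
    FOREIGN KEY (custom_property_id) REFERENCES custom_properties (id)
);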


anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
I guess this thread's broken, too.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
Another ghost post



Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
It feels like the gender and the reciprocal relationship are separate things. The gender is just one attribute of an unlimited number of possible attributes which could be assigned to a character.

So I'd do it something more similar to this.

code:
character
----
id: uuid

character_attr
----
id: uuid
character_id: uuid
name: string (eg. "Gender")
value: string (eg. "Female")

character_rel
----
id: uuid
character1_id: uuid
character2_id: uuid
value: string (eg. "Aunt")
That way I can have as many relationships between characters, of any type, as I want, and as many character attributes as I want. This is starting to get opinionated, but that's how I'd do it. If there were a common set of attributes or relationships, I'd keep them in a separate table so they could perhaps be "selected" easily in the UI.

code:
common_character_attrs
----
id: uuid
name: string
value: string

common_character_rels
----
id: uuid
value: string
Or, I'd do a fancy select statement to grab the most common character_attrs and character_rels without adding the extra tables. But now I'm getting off track and I don't know if I am answering your question properly. Was I way off?
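That "fancy select statement" for the common choices could be as simple as a grouped count (a sketch; assumes a LIMIT-style dialect):

code:
-- Most frequently used attribute name/value pairs, e.g. to pre-populate a
-- dropdown without a separate common_character_attrs table.
SELECT name, value, COUNT(*) AS times_used
FROM character_attr
GROUP BY name, value
ORDER BY times_used DESC
LIMIT 20;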
