NihilCredo
Jun 6, 2011

Suppress anger in every possible way:
that one thing will defame you more than many virtues will commend you

Jabor posted:

The use case for uuids is "I want to generate records in many disparate servers at the same time, and not have to deal with collisions when we eventually reconcile them".

If you're not running a wildly distributed setup, and are perhaps doing something a bit more traditional with a beefy server plus some read replicas, then don't bother.

Note that you can replace "many disparate servers" with "many disparate clients" and the same principle holds, without any need for your application to be anything more than a boring client-server monolith.

It's very nice when your mobile fart app can just locally generate a globally unique FartID and then keep POSTing your fart until success without any concerns about accidental double farts or whatever.
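
To make the idempotent-retry idea concrete, here is a minimal Postgres-flavoured sketch; the table, columns, and the literal UUID are all invented for illustration:

code:
-- the client mints the key itself (e.g. with a uuid4 library call) and can safely re-send the same payload
CREATE TABLE farts (
    fart_id     uuid PRIMARY KEY,
    user_id     bigint NOT NULL,
    recorded_at timestamptz NOT NULL DEFAULT now()
);

-- retrying the POST just re-runs this insert; the second attempt is a no-op
INSERT INTO farts (fart_id, user_id)
VALUES ('2f1c9f4e-3b7a-4c1e-9f0a-8d6e5b4a3c21', 42)
ON CONFLICT (fart_id) DO NOTHING;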


abelwingnut
Dec 23, 2002


there are people who don’t want surrogate keys? …why would one hate surrogate keys?

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
There are other reasons to favour GUIDs as primary keys aside from distributed databases.

https://en.wikipedia.org/wiki/German_tank_problem

also: in some circumstances, the fact that primary key values in different tables cannot collide might provide greater confidence that you are not introducing bugs like joins to the wrong table.
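
A contrived sketch of that last point, with invented table names: with small integer surrogates the mistaken join below "works" and quietly returns wrong rows, while with GUID keys the values in orders.customer_id and products.id will not coincide in practice, so the bug shows up as an empty result.

code:
-- wrong table: this should have joined customers, not products
SELECT o.id, p.name
FROM orders AS o
JOIN products AS p ON p.id = o.customer_id;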

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

redleader posted:

the only people who take offense at the idea of surrogate keys seem to be those who have crawled extremely up their own rear end using relational algebra

Everyone else is talking about surrogate keys as the alternative to externally-meaningful keys, but I suspect you are talking about surrogate keys as the alternative to composite keys, which is a different dichotomy.

redleader
Aug 18, 2005

Engage according to operational parameters

Hammerite posted:

Everyone else is talking about surrogate keys as the alternative to externally-meaningful keys, but I suspect you are talking about surrogate keys as the alternative to composite keys, which is a different dichotomy.

nope, applies to both cases

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
Well, what does the choice of real-world vs. surrogate keys have to do with relational algebra and one's view of it? The meaning of a key (or absence of meaning) is surely irrelevant to relational algebra.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
I'm happy to see there are finally some names given to some of the approaches I've advocated for instead of UUIDs in the past, and it seems that some databases have added functions to flop the high/low timestamps of UUIDs to reduce the btree insert performance issues. Maybe everyone will switch to ULIDs as the new hotness since there are libraries everywhere.

Here's the best summary I stumbled across, a good blend of basic performance and summary of alternatives: https://brandur.org/nanoglyphs/026-ids
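
MySQL 8's UUID_TO_BIN/BIN_TO_UUID swap flag is one concrete example of that timestamp flip; a quick sketch with an invented table:

code:
CREATE TABLE events (
    id      BINARY(16) PRIMARY KEY,
    payload JSON
);

-- the second argument swaps the time-low and time-high fields so inserts land roughly in key order
INSERT INTO events (id, payload)
VALUES (UUID_TO_BIN(UUID(), 1), '{"kind": "demo"}');

-- read it back in the familiar text form
SELECT BIN_TO_UUID(id, 1) AS id, payload FROM events;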

abelwingnut
Dec 23, 2002


gonna need some guidance on a stupid report i have to build. basically, i have to de-normalize? the data i have.

so the data is of the form:

code:
idTxn		buyer		idItem		item		cost

00001		james		372801		book		10
00001		james		559384		ipad		200
00001		james		211665		case		20
this is in the cte i have. it's a byproduct of a bunch of other code simplified.

however, my coworker needs me to convert that into:

code:
idTxn		buyer		idItem1		item1		cost1		idItem2		item2		cost2		idItem3		item3	cost3

00001		james		372801		book		10		559384		ipad		200		211665		case	20
so yea, i need to make it very un-database-like. i'm really not sure how to do this. a pivot? i do know i don't need to dynamically generate the column names--those are set. and they only want the first five, so i do not need to create as many columns for the items as there are items.

any help would be much appreciated.

abelwingnut fucked around with this message at 19:11 on Oct 14, 2021

nielsm
Jun 1, 2009



I can't write the SQL off the top of my head, but you can probably write a CTE that ranks the items for each transaction with ordinal 1, 2, 3, 4, 5,... and then select a five-way self join on the CTE to construct rows with up to five ranked items from each.

Condimentalist
Jun 13, 2007

abelwingnut posted:

gonna need some guidance on a stupid report i have to build. basically, i have to de-normalize? the data i have.

so the data is of the form:

code:
idTxn		buyer		idItem		item		cost

00001		james		372801		book		10
00001		james		559384		ipad		200
00001		james		211665		case		20
this is in the cte i have. it's a byproduct of a bunch of other code simplified.

however, my coworker needs me to convert that into:

code:
idTxn		buyer		idItem1		item1		cost1		idItem2		item2		cost2		idItem3		item3	cost3

00001		james		372801		book		10		559384		ipad		200		211665		case	20
so yea, i need to make it very un-database-like. i'm really not sure how to do this. a pivot? i do know i don't need to dynamically generate the column names--those are set. and they only want the first five, so i do not need to create as many columns for the items as there are items.

any help would be much appreciated.


What database vendor and version are you using? I actually posted some pivot code for a previous post recently. The numbered CTE is a good option if you have a set amount of columns, otherwise you'll probably need a dynamic approach.

abelwingnut
Dec 23, 2002


i’m using ms sql server.

Condimentalist
Jun 13, 2007

abelwingnut posted:

i’m using ms sql server.

This should work -- CTE method like nielsm suggested


code:
with a (idTxn, Buyer, idItem, item, cost, rn) as (
    select idTxn, Buyer, idItem, item, cost,
           row_number() over (partition by idTxn order by cost desc) as rn
    from t
)
select distinct
    a.idTxn, a.Buyer,
    a1.idItem as idItem1, a1.item as item1, a1.cost as item1Cost,
    a2.idItem as idItem2, a2.item as item2, a2.cost as item2Cost,
    a3.idItem as idItem3, a3.item as item3, a3.cost as item3Cost
from a
inner join a a1 on a.idTxn = a1.idTxn and a1.rn = 1
inner join a a2 on a.idTxn = a2.idTxn and a2.rn = 2
inner join a a3 on a.idTxn = a3.idTxn and a3.rn = 3



idtxn	buyer	idItem1	item1	item1Cost	idItem2	item2	item2Cost	idItem3	item3	item3Cost
1		james	559384	ipad	200			211665	case	20			372801	book	10

https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=0a5ab7edbb4f789396c4dfe9e86f9440

abelwingnut
Dec 23, 2002


thanks! i'll check it out next week.

abelwingnut
Dec 23, 2002


got it working! thank you!

Knot My President!
Jan 10, 2005

Serious Halloween / Spooky Crap > The Cavern of COBOL > The SQL is Always Just a Cache Grab

kiwid
Sep 30, 2013

I'm building a view-only dashboard with related metadata on top of our main production database. I've created a separate database for the application with read/write, and I'm pulling in data from our production database using a view and a read-only connection. I was considering adding foreign keys in my app's database relating the metadata to the data in the view, but I started wondering if this would affect the production database in any way. Like it wouldn't prevent a delete or something on the production database, would it?

Comb Your Beard
Sep 28, 2007

Chillin' like a villian.
I'm new to MySQL; I did years and years of SQL Server.

I want to turn off safe update mode on my dev db permanently on the stored connection level. Can I do that? Looking like I can't. Talking Workbench. Seems annoying.

https://stackoverflow.com/questions/11448068/mysql-error-code-1175-during-update-in-mysql-workbench
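
For what it's worth, the per-session escape hatch is just a variable; whether Workbench will persist it per stored connection is a separate question, but a script can always do this (table and columns invented):

code:
SET SQL_SAFE_UPDATES = 0;   -- safe update mode off for this session only
UPDATE some_table SET flagged = 1 WHERE name LIKE 'foo%';
SET SQL_SAFE_UPDATES = 1;   -- and back on when done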

Bank
Feb 20, 2004
I'm having a hell of a time trying to figure this out..if anyone has some pointers I'd be thrilled..

Let's say I have a table:
code:
Customer 1, Timestamp, EventStartName
Customer 1, Timestamp, RandomEvent
Customer 1, Timestamp, RandomEvent
Customer 1, Timestamp, EventEndName
I'm trying to pull the times when the event started and ended. So the output would be:

code:
Customer 1, 7 days
I'm having trouble comparing the timestamps for two different entries..columns are straightforward for me, but this is a whole new ballgame. If not I can play around with it or ask our BSA next week..

Edit: Welp, I think I figured it out! I did an INNER JOIN on the same table then it generated two timestamp columns for me which I can use to get the diff. Huzzah!

Bank fucked around with this message at 23:01 on Nov 3, 2021

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


Bank posted:

I'm having a hell of a time trying to figure this out..if anyone has some pointers I'd be thrilled..

Let's say I have a table:
code:
Customer 1, Timestamp, EventStartName
Customer 1, Timestamp, RandomEvent
Customer 1, Timestamp, RandomEvent
Customer 1, Timestamp, EventEndName
I'm trying to pull the times when the event started and ended. So the output would be:

code:
Customer 1, 7 days
I'm having trouble comparing the timestamps for two different entries..columns are straightforward for me, but this is a whole new ballgame. If not I can play around with it or ask our BSA next week..

Edit: Welp, I think I figured it out! I did an INNER JOIN on the same table then it generated two timestamp columns for me which I can use to get the diff. Huzzah!

So just to put it out there:
EventStart and EventEnd are probably better suited to being two columns on a table representing the entire event rather than separate rows.

Customer No, Event Key, EventStartTime, EventEndTime (nullable)

Child records could be accessed by having the parent event key as a column on the child record's row

Bank
Feb 20, 2004

Nth Doctor posted:

So just to put it out there:
EventStart and EventEnd are probably better suited to being two columns on a table representing the entire event rather than separate rows.

Customer No, Event Key, EventStartTime, EventEndTime (nullable)

Child records could be accessed by having the parent event key as a column on the child record's row

I probably didn't explain it clearly, but the problem is each event is a bit flip. So for example:

code:
Customer 1, Application Started, [timestamp]
Customer 1, Application Submitted, [timestamp]
Customer 1, Application Reviewed, [timestamp]
Customer 1, Application Approved, [timestamp]
In this case even if we had an event start and event end duration for each action, I'd still need to compare timestamps from different events.

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


Bank posted:

I probably didn't explain it clearly, but the problem is each event is a bit flip. So for example:

code:
Customer 1, Application Started, [timestamp]
Customer 1, Application Submitted, [timestamp]
Customer 1, Application Reviewed, [timestamp]
Customer 1, Application Approved, [timestamp]
In this case even if we had an event start and event end duration for each action, I'd still need to compare timestamps from different events.

Okay, that helps give context. It would still simplify your queries if you at least had some sort of Application ID you could use to group all of those records, so they'd be distinct from Customer 1 starting another application some other time. Feel free to ignore me

Bank
Feb 20, 2004
No worries..in this case a customer should only go through the full process once.

I didn't design this table, but probably would have done it differently if given the chance. I'm not a BSA/DBA by any measure though, I'm just trying to fix this stupid dashboard because someone decided to spend the past 3 months making excuses while waiting for their contract to end.

Condimentalist
Jun 13, 2007

Bank posted:

No worries..in this case a customer should only go through the full process once.

I didn't design this table, but probably would have done it differently if given the chance. I'm not a BSA/DBA by any measure though, I'm just trying to fix this stupid dashboard because someone decided to spend the past 3 months making excuses while waiting for their contract to end.

I can write the SQL better when I'm not on my phone, but one way to solve this problem is by joining to the table N times where N is the number of different steps..

So for example,
select ...
From table as t
Inner join t as tp on t.customer=tp.customer

Where t.type=submitted
And tp.type =processed
Etc.

This will get you one row per customer where you have a column for submitted date times and processed date times,etc. From here it's easy to calculate the difference.

Basically you are pivoting the data from rows to columns.
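
Fleshed out a little (SQL Server flavour; the app_events table and the literal event names are invented), the self-join idea looks roughly like this:

code:
SELECT s.customer,
       s.event_ts AS submitted_at,
       p.event_ts AS reviewed_at,
       DATEDIFF(day, s.event_ts, p.event_ts) AS days_between
FROM app_events AS s
INNER JOIN app_events AS p ON p.customer = s.customer
WHERE s.event_name = 'Application Submitted'
  AND p.event_name = 'Application Reviewed';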

Bank
Feb 20, 2004

Condimentalist posted:

I can write the SQL better when I'm not on my phone, but one way to solve this problem is by joining to the table N times where N is the number of different steps..

So for example,
select ...
From table as t
Inner join t as tp on t.customer=tp.customer

Where t.type=submitted
And tp.type =processed
Etc.

This will get you one row per customer where you have a column for submitted date times and processed date times,etc. From here it's easy to calculate the difference.

Basically you are pivoting the data from rows to columns.

Yup, that's basically what I did (see the edit I had). Inner join on itself and I basically had access to twice the number of columns. It felt really hacky, but I guess other than re-designing the table this is probably the easiest way to do it.

raminasi
Jan 25, 2005

a last drink with no ice

Bank posted:

Yup, that's basically what I did (see the edit I had). Inner join on itself and I basically had access to twice the number of columns. It felt really hacky, but I guess other than re-designing the table this is probably the easiest way to do it.

Depending on the engine you're using, you can probably use window functions and some filtering to do this too.
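
One way that could look, engine permitting (same invented app_events table as above): pull each milestone's timestamp onto every row with a windowed conditional aggregate, then de-duplicate.

code:
SELECT DISTINCT
       customer,
       MAX(CASE WHEN event_name = 'Application Submitted' THEN event_ts END)
           OVER (PARTITION BY customer) AS submitted_at,
       MAX(CASE WHEN event_name = 'Application Reviewed'  THEN event_ts END)
           OVER (PARTITION BY customer) AS reviewed_at
FROM app_events;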

Gatac
Apr 22, 2008

Fifty Cent's next biopic.
Hey all, hoping for some input on unfucking a Data Warehouse loading process that I'm about to inherit. Both live system and DWH are Oracle 19c databases. Right now, the intended process is that at 9 PM every day (well after anything happens in the live system), the DWH pulls in full copies of about a dozen "core" tables from the live system into its loading schema and processes from there. However, that's about all that's going right as of now. To wit:

  1. The source data model gets messed with every now and then, which fucks up the load unless the change is communicated beforehand - a mark that's been missed a couple of times.
  2. To sorta-deal with 1, "old" data has been allowed to accumulate in the DWH's loading schema so we can rerun a "historic" load after fixing the problem. This has allowed tables there to grow to dozens of times the size of the actual live tables. The worst I've seen so far was a live table of about half a million rows resulting in 128 million rows in the loading table, representing 2/3rds of a year's worth of daily loads that had not been cleaned. Our DBAs are accordingly alarmed about the growth of the storage needs but the dude who's running it currently has all but thrown up their hands, claiming that there's not enough downtime to actually run a cleanup on the tables.
  3. Plenty of reports still run on the live system because having day-old data is useless for dashboards and the like. Some reports even join from the DWH to live data to accomplish this. Accordingly, badly optimized queries can still gently caress up our live system (and semi-frequently do), but so far we haven't been able to kick the offenders off their access to the live DB because they do have a legitimate need for the data to do their reports.

I think what I want to achieve is a real-time DWH, but I freely admit I'm out of my depth here. How do I keep what's essentially copies of the live tables in the DWH loading schema with a "reasonable" lag?

So far I've looked at:
  • GoldenGate is apparently Oracle's replication middleware of choice but seems too hefty for what I want to achieve. Also further licensing costs which I'm pretty sure I won't be able to sell to the powers that be. Ditto paid third-party products.
  • DataGuard seems to provide a full clone of the source database, which is somewhat overkill for what I need. We do have an Enterprise license, though, so apparently we're already paying for that feature anyway? However, we recently converted the live system into a multitenant database (CDB/PDB) for the purposes of cloning production into test and dev, and it sounds like there are further issues to consider when DataGuarding here.
  • Refreshable PDB as copy of the live system seems much like the DataGuard solution with the further drawbacks that it's not continuously synched and also can't be accessed while it's being rebuilt.
  • Change Data Capture/Streams were used some ways back for a similar situation in-house and we have some relevant expertise in setting it up, but apparently those features are deprecated now and I don't think I want to recommend building something new with those.
  • Building my own CDC-analogue with blackjack and hookers advanced queueing and triggers. Honestly seems like the worst solution even at a glance from a reliability and setup complexity POV, but I'm sure it's technically possible.

Adding to the complications is that having real time (or near real time) data in the DWH means we would be loading while the live data is changing; how do I square having an always consistent overall version of all relevant tables in the DWH with the delays that arise from the replication?

And on the matter of data model changes in the source system: in a perfect world, they wouldn't happen outside of a coordinated release process, but we're still in the beginning stages of switching to that kind of forethought and planning. My pessimistic read is that the loading process needs to be as resilient as I can possibly make it because relying on anyone playing nice in this environment is a sucker's bet. Even right now, where there's way too much data lying around in the loading schema precisely because of this issue, that data is not complete as I see it because changes in the source objects mean new rows probably can't be inserted into the loading tables. It'd be cool if it was able to replicate DDL on the source tables to the mirrored tables in the DWH's loading schema; alternatively I would accept if it dumped source rows transformed into JSON or XML into a CLOB column, deferring the issue of data model up to the point where I need to process data out of the loading schema.

Any pointers/opinions/relevant resources from you guys?

Vicar
Oct 20, 2007

Do you store historical snapshots in the data warehouse, or load data from multiple sources into it? Or do you only really have a split between the live database and the data warehouse to prevent heavy analytical queries from impacting the live database? If it's the latter, it sounds like a read replica/standby would be the best and the simplest solution.

Gatac
Apr 22, 2008

Fifty Cent's next biopic.
We do store some historical data but I'm currently clearing up whether anyone actually needs it. Right now we're looking at the refreshable PDB method since Active Data Guard (the currently canonical "real-time read-only replica" method) costs $$$ in additional licensing.

SubponticatePoster
Aug 9, 2004

Every day takes figurin' out all over again how to fuckin' live.
Slippery Tilde
I got a promotion at work and will be doing SQL stuff but I don't have more than cursory knowledge (it's easier to learn how SQL works than our 25 year old mainframe which is why they hire internally lol). Is there a good online primer/intro course I can go through so I'm a bit ahead of things?

e: :tipshat:

SubponticatePoster fucked around with this message at 22:06 on Nov 11, 2021

Bank
Feb 20, 2004
I personally like these two:
https://www.codecademy.com/learn/learn-sql
https://www.w3schools.com/sql/default.asp

raminasi
Jan 25, 2005

a last drink with no ice
Once you've gotten more familiar with the syntax and want to understand a little better why some queries are slow and why some are fast, you can give this a shot. Beware: The formatting kind of hides it, but it's a whole-rear end book.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
How do I return the column names of the largest two values in a given row of data in an Aurora Postgres (12.7) table?

I am building a CDN monitoring system that pings each of 60 POPs from 20 locations and dumps the data into a table like this:

code:
hostname,	pop1,	pop2,	...,	pop60
host1		1.3   	3.4   	,...,   3.5
host2		3.6   	2.7   	,...,   4.4
host3		1.4   	3.5    	,...,	0.9
My query would return something like:

host, pop1, pop2
host2, pop2, pop1
host3, pop60, pop1

Is this doable with straight SQL?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
group by hostname, pivot columns to rows within a group, take the max of the rows within the group? I’d have to look if there’s an ANSI-compliant way to do pivot (ANSI SQL is incredibly limited and annoying and even then there’s variations in implementation details) but every decent SQL DB is going to offer that functionality regardless of ANSI-ness, perhaps just with slightly different syntax.

remember that depending on what you are doing, if this is part of something bigger, it may be easier to pull the annoying bits out into a view - you can prove the annoying bits work as expected, and potentially you can come back later and tune that bit if you’ve found a different way you want it to work (especially if that particular DB offers some specific non-ANSI functionality that helps you).

So you might have a view that gives you (hostname, minColumn, minValue, maxColumn, maxValue) and if you are using that as a part of a larger query you just join that. That approach also happens to give you a way to “encapsulate” non-portable queries or non-ORM’able queries - if you have to support three DBs, you can just have different definitions of that view for each DB, and if you need to use it as part of an ORM query (despite ORMs not necessarily supporting the syntax you need) you can define the view as an immutable table/entity in Hibernate or whatever, and then just join against it from HQL like a normal entity - you can have it as a lazy-loaded entity relationship, or if you want to avoid polluting your entities just do the SQL-style “JOIN MyReportView ON entity.hostname = MyReportView.hostname”.

(people hate ORMs but I’m of the opinion that ORM doesn’t have to be all or nothing, and if you can pull specific bits back to native SQL but still use ORM generally it can often be convenient to do so. Using “native query” functionality inside the ORM framework is fine if you just want a specific tuple, but if you want the whole object/POJO/etc then you’re writing the boilerplate code, which needs to be maintained going forward, and that’s where “views as ORM entities to encapsulate lovely parts of the logic” work well. It’s just the people drilling down to connection level and doing native poo poo there that make me wary; we’ve had far too many fails from people who thought they were cute writing native JDBC queries but also didn’t understand the full contract of Connection/PreparedStatement/ResultSet/etc and ended up causing cursor leaks/etc, or not understanding the necessity of hibernate getting flushed before native queries leading to stuff being visible or not visible when it should/shouldn’t be. So in my book, if your complex query needs to pull back more than a tuple (i.e. needs full objects), and it can be made to work with the “join a view and pull back the rows that match some ORM query” pattern then I’ve generally leaned on people to do that just to avoid the hazards of things going on that Hibernate doesn’t know about. The ORM is bumper bowling for lovely developers: yeah you still suck but it keeps you going in the right direction and limits how bad the fallout can be.)

Paul MaudDib fucked around with this message at 22:02 on Nov 12, 2021
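
A rough Postgres-flavoured sketch of the "encapsulate the ugly bit in a view" idea, using the wide pop table from the question above (only three of the sixty columns shown; all names are assumptions):

code:
CREATE VIEW my_report_view AS
SELECT hostname,
       (array_agg(popname ORDER BY latency ASC))[1]  AS min_column,
       min(latency)                                  AS min_value,
       (array_agg(popname ORDER BY latency DESC))[1] AS max_column,
       max(latency)                                  AS max_value
FROM (
    SELECT m.hostname, v.popname, v.latency
    FROM measurements AS m
    CROSS JOIN LATERAL (VALUES ('pop1', m.pop1),
                               ('pop2', m.pop2),
                               ('pop60', m.pop60)) AS v (popname, latency)
) AS unpivoted
GROUP BY hostname;

-- consumers (including a read-only ORM entity mapped to the view) then just join or filter it:
-- SELECT r.* FROM my_report_view AS r WHERE r.hostname = 'host2';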

nielsm
Jun 1, 2009



Consider if you should normalize your schema.

I think it would look something like

Host (id, name)
Pop (id, name)
Measurement (id, time)
ResponseTime (measurement_id, host_id, pop_id, response_time)
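
A rough Postgres rendering of that shape; the types and constraints are guesses:

code:
CREATE TABLE host (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name text NOT NULL UNIQUE
);

CREATE TABLE pop (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name text NOT NULL UNIQUE
);

CREATE TABLE measurement (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    time timestamptz NOT NULL
);

CREATE TABLE response_time (
    measurement_id bigint NOT NULL REFERENCES measurement (id),
    host_id        bigint NOT NULL REFERENCES host (id),
    pop_id         bigint NOT NULL REFERENCES pop (id),
    response_time  numeric NOT NULL,
    PRIMARY KEY (measurement_id, host_id, pop_id)
);

-- "two slowest pops per host" then falls out of a plain window query:
--   rank() OVER (PARTITION BY host_id ORDER BY response_time DESC)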

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!

Agrikk posted:

How do I return the column names of the largest two values in a given row of data in an Aurora Postgres (12.7) table?

I am building a CDN monitoring system that pings each of 60 POPs from 20 locations and dumps the data into a table like this:

code:
hostname,	pop1,	pop2,	...,	pop60
host1		1.3   	3.4   	,...,   3.5
host2		3.6   	2.7   	,...,   4.4
host3		1.4   	3.5    	,...,	0.9
My query would return something like:

host, pop1, pop2
host2, pop2, pop1
host3, pop60, pop1

Is this doable with straight SQL?
Usually you'd use rank() OVER (PARTITION BY hostname ORDER BY latency), but your data isn't normalized so you can't really do that. You can use a union query to select individual pops into proper rows of (host,popid, latency).

Having 60 of them sucks. Adding one more will suck more. The system tables can get the column names so you could leverage that to build a query to reshape the data.

Sorry, phone posting so that's the best I can do right now.
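
Spelled out for a few of the columns (Postgres flavour, assuming the wide table is called measurements), the union-then-rank shape would be something like:

code:
WITH unpivoted AS (
    SELECT hostname, 'pop1' AS pop, pop1 AS latency FROM measurements
    UNION ALL
    SELECT hostname, 'pop2', pop2 FROM measurements
    UNION ALL
    SELECT hostname, 'pop60', pop60 FROM measurements
    -- ...one branch per popN column
)
SELECT hostname, pop, latency
FROM (
    SELECT hostname, pop, latency,
           rank() OVER (PARTITION BY hostname ORDER BY latency DESC) AS rnk
    FROM unpivoted
) AS ranked
WHERE rnk <= 2;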

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

nielsm posted:

Consider if you should normalize your schema.

I think it would look something like

Host (id , name)
Pop (id, name)
Measurement (id, time)
ResponseTime (measurement_id, host_id, pop_id, response_time)

This is absolutely the right approach and it's amazingly simple now that you've brought it up.

When I created the table it seemed like a no-brainer to make it an X-Y chart with Z as the time dimension, but then I got stuck in this situation and I realized that it didn't feel right without knowing why.

I'm going to try rebuilding the app, which will require some re-coding of the host nodes, but I think it'll be worth it.

Thanks!

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


Agrikk posted:

This is absolutely the right approach and it's amazingly simple now that you've brought it up.

When I created the table it seemed like a no-brainer to make it an X-Y chart with Z as the time dimension, but then I got stuck in this situation and I realized that it didn't feel right without knowing why.

I'm going to try rebuilding the app, which will require some re-coding of the host nodes, but I think it'll be worth it.

Thanks!

Normalization saves the day yet again

Dawncloack
Nov 26, 2007
ECKS DEE!
Nap Ghost
What is a good primer on postgresql roles and users and how to manage them?

I know the basics of MySQL (as much as I need anyway) but I need to apply the same knowledge to extending a system to also record stuff in Postgres.

Problem is, that drat access control! I haven't gotten even to the point of creating a database and inserting things. I want to do this through the command line, I mean, psql in linux. And all the blog posts and explanations I find are either incomplete or waaaaaay over my head.

Thanks in advance!

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!

Dawncloack posted:

What is a good primer on postgresql roles and users and how to manage them?

I know the basics of MySQL (as much as I need anyway) but I need to apply the same knowledge to extending a system to also record stuff in Postgres.

Problem is, that drat access control! I haven't gotten even to the point of creating a database and inserting things. I want to do this through the command line, I mean, psql in linux. And all the blog posts and explanations I find are either incomplete or waaaaaay over my head.

Thanks in advance!
I can't tell if this is a getting started question (ie you just believe you need to understand roles).

https://www.postgresql.org/docs/14/tutorial-createdb.html

If you really think you need to be creating roles
https://www.postgresql.org/docs/14/user-manag.html

Or did you not configure the server?
https://www.postgresql.org/docs/14/auth-pg-hba-conf.html

It's easier to help if you describe what is working, and what it is actually doing if it seems wrong.

PhantomOfTheCopier fucked around with this message at 23:20 on Nov 23, 2021
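
If the immediate blocker is just "permission denied to create database", the minimal sequence is usually something like the following, run as the superuser and assuming pg_hba.conf already lets you connect; the role, database, and password names are placeholders:

code:
-- from a shell: psql -U postgres
CREATE ROLE app_user LOGIN PASSWORD 'change-me';
CREATE DATABASE app_db OWNER app_user;

-- then reconnect as the new role and work normally:
--   psql -U app_user -d app_db
-- CREATE TABLE notes (id serial PRIMARY KEY, body text);
-- INSERT INTO notes (body) VALUES ('hello');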


Dawncloack
Nov 26, 2007
ECKS DEE!
Nap Ghost
Thank you for those!

It's a starting question of sorts. I have SQL notions, but with the psql command line program I haven't even gotten to the point of creating a database and inserting. I keep getting "you don't have the privileges to do that ha ha!" errors.
