QuarkJets
Sep 8, 2008

Bundy posted:

Yeah, non-static typing in languages like Python, PHP, et al. was an exercise in saving time, but it turns out it's fine until it introduces loving horrible and subtle bugs in code. The experiment failed; declaring types is better, and strong typing should be a bare minimum (it is a good thing that Python does not coerce types for you).

Python is already strongly typed, but its variables are also dynamically typed. PHP's variables are weakly typed.
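A quick sketch of the difference (strong vs. dynamic), for anyone following along:

```python
# Dynamic typing: a name can be rebound to a value of a different type.
x = 1
x = "one"  # perfectly legal

# Strong typing: values are never silently coerced between types.
try:
    "1" + 1  # PHP would coerce and give "11"; Python refuses
except TypeError as exc:
    print(exc)  # can only concatenate str (not "int") to str
```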


Data Graham
Dec 28, 2009

📈📊🍪😋



Coming to Python from Perl, .... lol

Bad Munki
Nov 4, 2008

We're all mad here.


Data Graham posted:

Coming to Python from Perl, .... lol

Amen, brother

e: I mean, that poo poo wasn’t just loose, it was promiscuous

Bad Munki fucked around with this message at 03:03 on Oct 18, 2020

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I have some need to manage a "%{nested %{tagged structure}%}%" inside some metadata string. Thanks to the nesting, I'm not keen on regular expressions. The first impulse would be to use a parsing library. I use ANTLR4 in personal work, but this would be for my day job and I really can't expect anybody around me to work with it. I'm wondering if I even need a general parsing library and a grammar for what I'm trying to do. If there's a library for a syntax like that already, I'd prefer to leverage it instead of writing and maintaining my own parsing logic. Side note: I don't think treating it like an XML text field will be well-received, and it would be particularly bonkers since the input is starting out as JSON.
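For what it's worth, a format this small may not need a grammar at all; a short stack-based scanner handles arbitrary nesting. This is only a sketch against assumptions about the format (that %{ opens a tag, }% closes it, and plain text can appear anywhere), so treat the details as illustrative:

```python
def parse_tags(s):
    """Parse a string containing %{ ... }% tags (possibly nested) into a
    nested list: plain text stays as str, each tag becomes a sub-list."""
    root = []
    stack = [root]
    buf = []

    def flush():
        if buf:
            stack[-1].append("".join(buf))
            buf.clear()

    i = 0
    while i < len(s):
        if s.startswith("%{", i):
            flush()
            node = []
            stack[-1].append(node)  # nest the new tag under the current one
            stack.append(node)
            i += 2
        elif s.startswith("}%", i):
            flush()
            if len(stack) == 1:
                raise ValueError(f"unbalanced close tag at index {i}")
            stack.pop()
            i += 2
        else:
            buf.append(s[i])
            i += 1
    flush()
    if len(stack) > 1:
        raise ValueError("unclosed tag")
    return root

print(parse_tags("%{nested %{tagged structure}%}%"))
# [['nested ', ['tagged structure']]]
```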

Jakabite
Jul 31, 2010
Bought a book to work through to get into Python and already having some issues. I downloaded Anaconda and installed it to my external storage (not sure if relevant, do I have to use the drive Windows is on?), and one of the first things I did was write the usual Hello World. All fine. Then the book introduces the 'dir' command in the terminal, and rather than listing the files in the directory the output is just <function dir>, no matter what I do. What's happening goons?

Data Graham
Dec 28, 2009

📈📊🍪😋



"dir" isn't to get a directory listing, it's to list all the attributes of an object. Like "dir(object)" will give you a list of all the attributes and methods defined on it.

You want something like https://stackabuse.com/python-list-files-in-a-directory/


e: yeah if the book isn't making clear the distinction between the Python REPL (the interactive Python environment/shell) and the OS terminal, it isn't doing a very good job. But it's also not an obvious concept for a newbie.
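To make the distinction concrete:

```python
import os
from pathlib import Path

# dir() introspects a Python object: here, the methods a list supports.
print(dir([])[-3:])  # ['remove', 'reverse', 'sort']

# Listing files on disk is a different job entirely:
print(os.listdir("."))            # names in the current directory
print(list(Path(".").iterdir()))  # same idea, but as Path objects
```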

Data Graham fucked around with this message at 16:03 on Oct 29, 2020

Dominoes
Sep 20, 2007

As Data points out, the dir function doesn't do what you're asking. Check out the official docs.

edit: I think your book is talking about a shell command (dir in cmd/PowerShell, ls on Unix), and it is not directly related to Python.

Dominoes fucked around with this message at 15:51 on Oct 29, 2020

Jakabite
Jul 31, 2010
Huh, maybe I’ve misread the book I’m using. Thanks!

Wallet
Jun 19, 2006

Rocko Bonaparte posted:

Side note: I don't think treating it like an XML text field will be well-received, and it would be particularly bonkers since the input is starting out as JSON.

Depending on how complicated the structure/data is I would be tempted to just write a function to turn it back into JSON and, if necessary, one to turn the JSON back into the tag structure.

NinpoEspiritoSanto
Oct 22, 2013




Jakabite posted:

Bought a book to work through to get into Python and already having some issues. I downloaded Anaconda and installed it to my external storage (not sure if relevant, do I have to use the drive Windows is on?), and one of the first things I did was write the usual Hello World. All fine. Then the book introduces the 'dir' command in the terminal, and rather than listing the files in the directory the output is just <function dir>, no matter what I do. What's happening goons?

If you're used to programming but new to Python the official tutorial is good. If you're new or newish to programming in general, Think Python 2e is a must imo.

Butter Activities
May 4, 2018

Automate the Boring Stuff was the best purchase I ever made.

NinpoEspiritoSanto
Oct 22, 2013




SpaceSDoorGunner posted:

Automate the Boring Stuff was the best purchase I ever made.

Yeah this is a good title as well. My post history in this thread should have some other worthy reads and links

Dominoes
Sep 20, 2007

Hey bros. Is there a good way to visualize data with 3 input dims and 1 output? Maybe with some sort of shading, or density of points, or threshold values for drawing surfaces.

(To define my terms. 1 in 1 out: Scatter/line plot. 2 in 1 out: surface or contour plot. 2 in 2 out: Vector plot in 2d. 3 in 3 out: Vector plot in 3d.)

Bad Munki
Nov 4, 2008

We're all mad here.


Like your 2-in 1-out, but add hue for the 3rd dim.

CarForumPoster
Jun 26, 2013

⚡POWER⚡

Dominoes posted:

Hey bros. Is there a good way to visualize data with 3 input dims and 1 output? Maybe with some sort of shading, or density of points, or threshold values for drawing surfaces.

(To define my terms. 1 in 1 out: Scatter/line plot. 2 in 1 out: surface or contour plot. 2 in 2 out: Vector plot in 2d. 3 in 3 out: Vector plot in 3d.)

It's hard to know without knowing the data types, but if they are all ordinal categories/numeric data, you can put 4 features on a scatter plot easily with X axis, Y axis, point color (green->red), and point size.

Which of those should be your three inputs and which one should be your output kinda depends on the data, though I suppose I default to Y axis or color as output variables.
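A minimal sketch of that idea (position for the three inputs, colour for the output), using random stand-in data since we don't know the real dataset:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this line to view the plot
import matplotlib.pyplot as plt

# Stand-in data: three numeric inputs (x, y, z) and one scalar output (w).
rng = np.random.default_rng(0)
x, y, z = rng.random((3, 200))
w = np.sin(6 * x) + y - z  # made-up output

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Position encodes the three inputs; colour encodes the output.
sc = ax.scatter(x, y, z, c=w, cmap="viridis")
fig.colorbar(sc, label="output")
fig.savefig("scatter4d.png")
```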

Bad Munki
Nov 4, 2008

We're all mad here.


And bear in mind, every lovely meme that gets posted is showing 3 values at every x/y coordinate; you can use 1, 2, or 3 of those bands to your heart’s desire similarly.

post hole digger
Mar 21, 2011

Random question: I am working through Eric Matthes Python Crash Course right now and I've gotten a lot out of it. I'm currently going through the part about importing data from a csv and plotting it with matplotlib.

Here's a code sample, an example from the book:

code:
file = 'data/sitka_weather_07-2018_simple.csv'

with open(file) as f:
    reader = csv.reader(f)
    header_row = next(reader)

    dates, highs = [], []
    for row in reader:
        current_date = datetime.strptime(row[2], '%Y-%m-%d')
        high = int(row[5])
        dates.append(current_date)
        highs.append(high)

plt.style.use('seaborn')
fig, ax = plt.subplots()
ax.plot(dates, highs, c='red')
I'm wondering, is there a reason to put the data into two separate lists instead of a dictionary? It seems like the data is better organized this way:

code:
    data = {}
    for row in reader:
        current_date = datetime.strptime(row[2], '%Y-%m-%d')
        high = int(row[5])
        data[current_date] = high

plt.style.use('seaborn')
fig, ax = plt.subplots()
ax.plot(data.keys(), data.values(), c='red')
Since this is about downloading data and data visualization, is it a better practice to keep things in lists instead of dicts for some reason? Or is this more or less completely unimportant?

post hole digger fucked around with this message at 01:42 on Oct 30, 2020

CarForumPoster
Jun 26, 2013

⚡POWER⚡

my bitter bi rival posted:

Random question: I am working through Eric Matthes Python Crash Course right now and I've gotten a lot out of it. I'm currently going through the part about importing data from a csv and plotting it with matplotlib.

Here's a code sample, an example from the book:

code:
file = 'data/sitka_weather_07-2018_simple.csv'

with open(file) as f:
    reader = csv.reader(f)
    header_row = next(reader)

    dates, highs = [], []
    for row in reader:
        current_date = datetime.strptime(row[2], '%Y-%m-%d')
        high = int(row[5])
        dates.append(current_date)
        highs.append(high)

plt.style.use('seaborn')
fig, ax = plt.subplots()
ax.plot(dates, highs, c='red')
I'm wondering, is there a reason to put the data into two separate lists instead of a dictionary? It seems like the data is better organized this way:

code:
    data = {}
    for row in reader:
        current_date = datetime.strptime(row[2], '%Y-%m-%d')
        high = int(row[5])
        data[current_date] = high

plt.style.use('seaborn')
fig, ax = plt.subplots()
ax.plot(data.keys(), data.values(), c='red')
Since this is about downloading data and data visualization, is it a better practice to keep things in lists instead of dicts for some reason? Or is this more or less completely unimportant?

This is not my area of expertise, but no, it doesn't matter. Also, unless I have some big performance concern, I would use pandas for that whole bit of code by default.

That whole buncha lines could be as simple as:
code:
import pandas as pd

# column names assumed from the book's dataset (DATE, TMAX); check the CSV's header row
df = pd.read_csv('data/sitka_weather_07-2018_simple.csv', parse_dates=['DATE'])
df.plot(x='DATE', y='TMAX', color='red')
And in a jupyter notebook you can output matplotlib (which pandas wraps) inline with %matplotlib inline at the start of your notebook.

CarForumPoster fucked around with this message at 01:57 on Oct 30, 2020

NinpoEspiritoSanto
Oct 22, 2013




The csv module also has DictReader. Please don't reach for pandas whenever CSV comes up; it's often overkill, and in many other cases the task is far easier done using SQL.
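For reference, a stdlib-only sketch of DictReader (the header names here are made up):

```python
import csv
import io

# csv.DictReader keys each row by the header names; no third-party deps.
raw = io.StringIO("DATE,TMAX\n2018-07-01,62\n2018-07-02,58\n")
rows = list(csv.DictReader(raw))
print(rows[0]["DATE"], int(rows[0]["TMAX"]))  # 2018-07-01 62
```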

CarForumPoster
Jun 26, 2013

⚡POWER⚡

Bundy posted:

The csv module also has DictReader. Please don't reach for pandas whenever CSV comes up; it's often overkill, and in many other cases the task is far easier done using SQL.

I probably care too much and am misinterpreting your post, but it struck a nerve...

Software should be written to meet requirements that actually exist in reality, not in some programmer's idea of what feels "efficient". The concept of "overkill" is stupid unless you actually have performance requirements. If I am serving a webpage to millions of users via flask, I'll generally not use pandas. When I'm making business dashboards that can take some extra time, because they get checked less than once per day by one person, I use pandas a ton. When learning to code and/or making hobby projects, memory is free. Programming/learning time is not.

Starting out ~3 years ago, I definitely credit pandas with getting me from "oh maybe I'll learn python" to having started a company that relies on software developed almost exclusively in python for its core functions. "I save 10 milliseconds at runtime and wrote 20 extra lines of code" is something I beat out of my CS interns.

CarForumPoster fucked around with this message at 02:14 on Oct 30, 2020

NinpoEspiritoSanto
Oct 22, 2013




I'll respond to the rest with my reasoning re: pandas tomorrow when it's not gone 1am, but it's not helpful to tell someone to throw away a part of a tutorial in favour of a library with a completely different API and quirks of its own; most decent tutorials tend to build on prior work.

Besides, the stdlib csv module is useful to know, and it isn't always possible to pip install a cheat code in a production environment. I'm not a LPTHW advocate by any means, but the csv module is fine. And I'm not a lone voice; we often get issues with pandas users in #python on freenode because of its quirks and it often just being garbage/nonsensical at the job it's being asked to do.

CarForumPoster
Jun 26, 2013

⚡POWER⚡

Bundy posted:

pandas its own quirks

Eh, that’s a good point, pandas definitely has its own quirks. It’s a rabbit hole worth going down IMO, but maybe not right now.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Bundy posted:

The csv module also has DictReader. Please don't reach for pandas whenever CSV comes up; it's often overkill, and in many other cases the task is far easier done using SQL.

pandas is a lot easier to deal with than dictreader

especially when you need to parse strings and slap them into a frame-like structure anyway

DoctorTristan
Mar 11, 2006

I would look up into your lifeless eyes and wave, like this. Can you and your associates arrange that for me, Mr. Morden?

my bitter bi rival posted:

Random question: I am working through Eric Matthes Python Crash Course right now and I've gotten a lot out of it. I'm currently going through the part about importing data from a csv and plotting it with matplotlib.

Here's a code sample, an example from the book:

code:
file = 'data/sitka_weather_07-2018_simple.csv'

with open(file) as f:
    reader = csv.reader(f)
    header_row = next(reader)

    dates, highs = [], []
    for row in reader:
        current_date = datetime.strptime(row[2], '%Y-%m-%d')
        high = int(row[5])
        dates.append(current_date)
        highs.append(high)

plt.style.use('seaborn')
fig, ax = plt.subplots()
ax.plot(dates, highs, c='red')
I'm wondering, is there a reason to put the data into two separate lists instead of a dictionary? It seems like the data is better organized this way:

code:
    data = {}
    for row in reader:
        current_date = datetime.strptime(row[2], '%Y-%m-%d')
        high = int(row[5])
        data[current_date] = high

plt.style.use('seaborn')
fig, ax = plt.subplots()
ax.plot(data.keys(), data.values(), c='red')
Since this is about downloading data and data visualization, is it a better practice to keep things in lists instead of dicts for some reason? Or is this more or less completely unimportant?

Using two lists has some downsides, but using a dict like this is an absolutely loving terrible idea for a number of reasons. I’ll limit myself to pointing out two.

Firstly, and most obviously, if there are multiple observations on the same date then you will lose data, since only the last such observation will be retained.

Secondly, pretty much every operation you might want to do on this data is significantly easier if they are in lists. For example - calculating the first differences in temperature highs is as trivial as
code:
highs[1:] - highs[:-1]
for a numpy array (for a plain list: [b - a for a, b in zip(highs, highs[1:])]), but is a pain in the arse if it is in a dict. Plotting one vs the other is similarly easier if both are lists.

This second point touches on the related issue that by putting the data into a dict like this you are throwing away the ordering. The source data file is ordered (there is a first row, a second row, etc.), whereas dicts are not, so by putting this data in a dict the ordering is lost. (In this example it may be recoverable if the date stamps are ordered with no duplicates, but that's a very big if.)
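The first point is easy to demonstrate:

```python
# Two readings on the same date: building a dict silently keeps only the last.
rows = [("2018-07-01", 62), ("2018-07-01", 64), ("2018-07-02", 58)]
data = dict(rows)
print(len(rows), len(data))  # 3 2
print(data["2018-07-01"])    # 64 -- the 62 reading is gone
```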

NinpoEspiritoSanto
Oct 22, 2013




Just to note dicts are ordered as of 3.6.
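A quick demonstration:

```python
# Since CPython 3.6 (an implementation detail there, a language guarantee
# since 3.7), dicts preserve insertion order.
d = {}
d["banana"] = 2
d["apple"] = 1
print(list(d))  # ['banana', 'apple'], not alphabetical
```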

death cob for cutie
Dec 30, 2006

dwarves won't delve no more
too much splatting down on Zot:4

Bundy posted:

Just to note dicts are ordered as of 3.6.

hot drat, they are https://docs.python.org/3.6/whatsnew/3.6.html#new-dict-implementation

I was wondering why, the last few times I tried to show students that dictionaries aren't ordered, it wasn't behaving

Hollow Talk
Feb 2, 2014
Also, lists or lists of tuples are nicer to use in list comprehensions since you don't need to do .items() every time.

Also not to start anything, but when we interview for people with limited experience, they always solve the task we set with pandas plus jupyter, and so far, they all tell you they "know some data science" etc, yet these are real problems:

- The CSV they get has semicolon delimiters
- The CSV has timestamps that pandas doesn't convert automatically

Pandas is a good tool, but for the love of God, please know how to parse a CSV, JSON, XML, and timestamps via standard library if you describe yourself as anything "data". And how to serialise data to CSV and to JSON. Once you understand the pros and cons, do use pandas when appropriate. Also, learn SQL. Do it. Do iiiiit!

Incidentally, the task they get is otherwise not very good, and I'm still waiting for somebody who has the nerve to just do it in awk or something. Also, incidentally, I already put in my 3 months notice. :yotj:
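For the record, both of those interview gotchas are a few lines of stdlib (the column names and format string here are invented for the example):

```python
import csv
import io
from datetime import datetime

# A semicolon-delimited CSV with a non-ISO timestamp, parsed stdlib-only.
raw = io.StringIO("ts;value\n31-10-2020 12:00:00;3\n")
parsed = [
    (datetime.strptime(row["ts"], "%d-%m-%Y %H:%M:%S"), int(row["value"]))
    for row in csv.DictReader(raw, delimiter=";")
]
print(parsed[0][0].isoformat(), parsed[0][1])  # 2020-10-31T12:00:00 3
```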

NinpoEspiritoSanto
Oct 22, 2013




Hollow Talk posted:

Also, lists or lists of tuples are nicer to use in list comprehensions since you don't need to do .items() every time.

Also not to start anything, but when we interview for people with limited experience, they always solve the task we set with pandas plus jupyter, and so far, they all tell you they "know some data science" etc, yet these are real problems:

- The CSV they get has semicolon delimiters
- The CSV has timestamps that pandas doesn't convert automatically

Pandas is a good tool, but for the love of God, please know how to parse a CSV, JSON, XML, and timestamps via standard library if you describe yourself as anything "data". And how to serialise data to CSV and to JSON. Once you understand the pros and cons, do use pandas when appropriate. Also, learn SQL. Do it. Do iiiiit!

Incidentally, the task they get is otherwise not very good, and I'm still waiting for somebody who has the nerve to just do it in awk or something. Also, incidentally, I already put in my 3 months notice. :yotj:

This is a good post. As someone that recently suffered through a 3 month notice, good luck OP

CarForumPoster
Jun 26, 2013

⚡POWER⚡

Hollow Talk posted:

- The CSV they get has semicolon delimiters
- The CSV has timestamps that pandas doesn't convert automatically

Pandas is a good tool, but for the love of God, please know how to parse a CSV, JSON, XML, and timestamps via standard library if you describe yourself as anything "data".

I already put in my 3 months notice. :yotj:

Putting in more than 2, sometimes 4, weeks notice is haram.

df = pd.read_csv('data.csv', sep=';') can read semicolon-delimited files, and pd.to_datetime(df['col'], format="%Y/%m/%d") converts datetimes trivially. You can also pass a lambda function to read_csv to parse dates like so:
code:
from datetime import datetime

custom_date_parser = lambda x: datetime.strptime(x, '%Y-%d-%m %H:%M:%S')
df = pd.read_csv('data.csv',
                sep=';',
                parse_dates=['date'],
                date_parser=custom_date_parser)
...that said you'd obviously have to know how to parse dates using the std lib and I completely agree that the lack of being able to properly work with JSON and XML data for "data" people is a huge problem, whether one uses the std lib or not.

I'm on a soapbox about this because in the absence of specific requirements for either, writing maintainable, readable code should be more important than writing performant code so long as the performance is "good enough".

Wallet
Jun 19, 2006

Bundy posted:

Just to note dicts are ordered as of 3.6.

I totally missed this and it's blowing my mind.

Dominoes
Sep 20, 2007

Bad Munki posted:

Like your 2-in 1-out, but add hue for the 3rd dim.
Thanks. That would convey the info, ie a colored surface plot.


CarForumPoster posted:

It's hard to know without knowing the data types, but if they are all ordinal categories/numeric data, you can put 4 features on a scatter plot easily with X axis, Y axis, point color (green->red), and point size.

Which of those should be your three inputs and which one should be your output kinda depends on the data, though I suppose I default to Y axis or color as output variables.
All numeric data types. The colored scatter would work too, and is conceptually similar to the colored surface, although the output has a value for any combo of inputs. I'm trying to visualize molecular orbitals, so ideally it would be some sort of hazy 3d object, or perhaps (this is common) a contour plot up a dimension, i.e. bounded 3d solids at a threshold value (although this would be very sensitive to the threshold). I worry that the colored surface plot or scatter wouldn't be great at building visual intuition for what the orbitals look like. Although maybe a modified scatter, where instead of a uniform set of points with diff colors, for each sample point you drop a different density of scattered points, so you see many points near a high value and none near 0.

I think ultimately, the answer will have to be some sort of custom rendering. Refreshing Vulkan skills. Does this seem overkill/over-engineered? Yes. Is this one of those cases where you have to fight that instinct and just do it? Leaning yes.




Bad Munki posted:

And bear in mind, every lovely meme that gets posted is showing 3 values at every x/y coordinate, you can use 1, 2, or 3 of those bands to your heart’s desire similarly.
True!

Dominoes fucked around with this message at 15:30 on Oct 30, 2020

CarForumPoster
Jun 26, 2013

⚡POWER⚡

Dominoes posted:

Thanks. That would convey the info, ie a colored surface plot.

All numeric data types. The colored scatter would work too, and is conceptually similar to the colored surface, although the output has a value for any combo of inputs. I'm trying to visualize molecular orbitals, so ideally it would be some sort of hazy 3d object, or perhaps (this is common) a contour plot up a dimension, i.e. bounded 3d solids at a threshold value (although this would be very sensitive to the threshold). I worry that the colored surface plot or scatter wouldn't be great at building visual intuition for what the orbitals look like. Although maybe a modified scatter, where instead of a uniform set of points with diff colors, for each sample point you drop a different density of scattered points, so you see many points near a high value and none near 0.

I think ultimately, the answer will have to be some sort of custom rendering. Refreshing Vulkan skills. Does this seem overkill/over-engineered? Yes. Is this one of those cases where you have to fight that instinct and just do it? Leaning yes.

True!

My (limited!) experience with data viz has always been that I NEVER make good graphs on the first try. I use this workflow:

1) Make the same plot 5 different ways...e.g. scatter plots with different visual setups, response surface/contour map, histograms, break one graph into two, etc.
2) Show it to someone else. Ask them "What do these plots show?" without telling them what its about.
3) Silently get frustrated that none of them are clear enough and agonize quietly while they try to guess and their guesses are WRONG WRONG WRONG.
4) When they're all too bad to be obvious (and graphs should be obvious, that is the point), I ask them: "Which one tells you that _____ is the most important/best/whatever factor/outcome?"
5) Have them pick a top 1 or two, rework them and show them again.
6) Repeat steps 1 through 4 with a person who hasn't seen them already.

Also lol @ trying to represent 3 factors and an output on one graph. Prepare for steps 1 through 5 like 3 times.

EDIT: One exception to the above lol is when the 3 factors and output are in some easily thought-of coordinate system like (X, Y, Z, output) or (range, azimuth, elevation, power in dB)

EDIT2: Just post your graphs. It's my first day off in a while and I am day-drunk posting about Python.

CarForumPoster fucked around with this message at 16:49 on Oct 30, 2020

Dominoes
Sep 20, 2007

CarForumPoster posted:

EDIT: One exception to the above lol is when the 3 factors and output are in some easily thought-of coordinate system like (X, Y, Z, output) or (range, azimuth, elevation, power in dB)
In this case, they do correspond to the 3 space dimensions! The output is a scalar field value (electron wave function, or probability-density function).

quote:

EDIT2: Just post your graphs. It's my first day off in a while and I am day-drunk posting about Python.
Here's a 2D example. I don't yet have the 3d computations done, but it will be an extension of this into one extra dimension:


The orbitals table images on this page are one way the 3d graph might look.

Hollow Talk
Feb 2, 2014

CarForumPoster posted:

Putting in more than 2, sometimes 4, weeks notice is haram.

df = pd.read_csv('data.csv', sep=';') can read semicolon-delimited files, and pd.to_datetime(df['col'], format="%Y/%m/%d") converts datetimes trivially. You can also pass a lambda function to read_csv to parse dates like so:
code:
from datetime import datetime

custom_date_parser = lambda x: datetime.strptime(x, '%Y-%d-%m %H:%M:%S')
df = pd.read_csv('data.csv',
                sep=';',
                parse_dates=['date'],
                date_parser=custom_date_parser)
...that said you'd obviously have to know how to parse dates using the std lib and I completely agree that the lack of being able to properly work with JSON and XML data for "data" people is a huge problem, whether one uses the std lib or not.

I'm on a soapbox about this because in the absence of specific requirements for either, writing maintainable, readable code should be more important than writing performant code so long as the performance is "good enough".

Some people live in places with actual labour laws that go both ways regarding work contracts. :haw:

But yes, pandas can do those things and many more. I just feel that once "use pandas" becomes an automatism or the default suggestion, it becomes a bit more likely that users who have only ever(!) relied on automatic behaviour will have similar holes in their knowledge. As I said: figure out how to do it "manually", library-assisted, and with pandas, then choose as appropriate in any given project.

CarForumPoster
Jun 26, 2013

⚡POWER⚡

Dominoes posted:

In this case, they do correspond to the 3 space dimensions! The output is a scalar field value (electron wave function, or probability-density function).

Here's a 2D example. I don't yet have the 3d computations done, but it will be an extension of things this in 1 extra dimension:


The orbitals table images on this page are one way the 3d graph might look.

Ohhhhhh...why?

If the values in 3D matter because you're going to use them for some further-down-the-line computation and you want the user to be able to reason about that scalar by looking at it... sure, that makes sense. They don't really NEED this graph; it just helps them fact-check themselves on a hard science problem, and your users are all hard science people who will invest the time in understanding your bad graphs.

That said, if I am understanding you properly (and I remind you... I am a little drunk), GTA V does a similar thing with the police search volumes on the map. You can see this in the video below at 3:33, where although it looks like the search volume is a pie-shaped cross section of a cylinder, the actual search volume is a cross section of a hemisphere, with the police line of sight being the widest part of the hemisphere. IMO video games tend to be really REALLY good at data viz. Even though this isn't a perfectly accurate representation, it is useful in helping the user reason about the problem.

https://www.youtube.com/watch?v=ig63EIyq3mo

What I am saying is: does it really need to be in 3D? I have a master's degree in engineering and it took me going to Wikipedia, thinking about it, and reading your post twice to understand your graph. The answer may well be yes, but you should expect some users to think they don't get it and quit.

EDIT:

My first thought for how to solve this is to present two 2D cross sections rather than attempt the thing with one 3D graph, but I remind you that I am ALWAYS wrong about data viz on the first go.

CarForumPoster fucked around with this message at 18:00 on Oct 30, 2020

QuarkJets
Sep 8, 2008

Another way to think of it is that showing a 2D representation of a 3D heat map is always going to result in a 2D slice anyway. If you want a user to build visual intuition about the 3D shape, then you either need to give them access to the 3D heat map so that they can alter the viewing position themselves or give them enough 2D slices of the object for them to build some visual intuition.
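A sketch of the slicing idea, with a made-up Gaussian blob standing in for an orbital density:

```python
import numpy as np

# A 3D scalar field sampled on a grid; any flat view of it is a slice
# (or a projection). 41 points per axis so the grid includes 0 exactly.
x, y, z = np.mgrid[-1:1:41j, -1:1:41j, -1:1:41j]
field = np.exp(-4 * (x**2 + y**2 + z**2))  # stand-in for an orbital density

mid = field.shape[2] // 2
slice_xy = field[:, :, mid]  # the z = 0 plane, ready for imshow/contourf
print(slice_xy.shape, slice_xy.max())
```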

SurgicalOntologist
Jun 17, 2004

My first ambitious coding project (almost 15 years ago) was a visualizer of N-D data where you could set 3 dimensions as x, y, z, the rest became sliders, and you could navigate the point cloud with WASD or something and/or take different slices with the sliders. I think it was useful for my research group, but it was also way overkill and the interface sucked (it was in Matlab :barf:).

Anyways, surely something like this, but good, exists. Have you checked out Holoviews for example? I'm not familiar with it but it's the first thing that came to mind, maybe it allows you to do something like I did back in the day.

But yes, allowing manual exploration of the data is key to getting a sense of it.

Bad Munki
Nov 4, 2008

We're all mad here.


Sounds like you need to get into VR

SurgicalOntologist
Jun 17, 2004

Haha, I was in VR at the time, actually (a lowly lab manager messing with Matlab when there was no equipment to order).


QuarkJets
Sep 8, 2008

Hey fellow users of Anaconda

I recall someone pointing out that Anaconda changed their TOS a month ago alluding to commercial entities having to buy a license, and now they have announced Anaconda Commercial Edition and posted a FAQ for what that means

https://www.anaconda.com/blog/anaconda-commercial-edition-faq

They spell it out plainly: if you do not have a business relationship with Anaconda and are using Anaconda's repositories in a for-profit or government organization with over 200 employees, then you are not in compliance.

Good luck navigating this, everyone!
