  • Locked thread
Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
Pretty sure BigRedDot wrote bokeh


pmchem
Jan 22, 2010


dear bigreddot please include pylint by default in anaconda installers, I have to fill out paperwork that gets signed by several people each time I want a tiny little change in my software

BigRedDot
Mar 6, 2008

cingulate posted:

Yes, I already have the MKL and actually just compiled R to make use of it (... instead of going the comfortable route and downloading Revolution Analytics' R distribution).
I've also set up a few conda envs (thanks to this thread) - though may I ask in this context how I can remove an entire environment at once over the CLI?

I'm just wondering if a mostly computer-illiterate person such as I should even bother trying to get MKL and an Anaconda Python to play along nicely by hand (e.g. by manually compiling Numpy), or if I should just go for my credit card.

Do I understand correctly that you're working for continuum.io?
Yes, I am one of the original employees. It's been a very exciting and busy three years! To remove an environment you can use conda env remove -n myenv, but honestly I mostly just do rm -rf ~/anaconda/envs/myenv, since that's really all it means to remove an environment.

As for compiling it yourself, I mean if you like tinkering or would like to learn about building stuff like this, then sure I suppose there's no reason not to give it a shot. The NumPy site.cfg (the file you need to change before you build) actually has a section commented out with some instructions on how to build with MKL. If you want to try to build a real conda package, you can even check out the conda recipe that is used to build mkl numpy on github: https://github.com/conda/conda-recipes/tree/master/numpy-mkl OTOH you'll probably spend more than 30 bucks of your time getting it done, so if you just want it to work, now...
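For reference, the MKL section in NumPy's site.cfg looks roughly like this; the library paths are illustrative and depend on where your MKL is installed:

```ini
[mkl]
library_dirs = /opt/intel/mkl/lib/intel64
include_dirs = /opt/intel/mkl/include
mkl_libs = mkl_rt
lapack_libs =
```

Uncomment and adjust the section that ships in the file rather than writing it from scratch, since it documents the expected keys.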

Blinkz0rz posted:

Pretty sure BigRedDot wrote bokeh

Hah! Would that I could take credit for the whole thing. :) Bokeh has become a fairly large project and is absolutely a team effort. Especially happy that we seem to be getting more new contributors lately. I will toot my own horn about one thing though: I just wrote a bunch of Sphinx extensions for bokeh, in particular you can inline bokeh plots directly into sphinx .rst files. Will be available in a dev build later this week or the 0.8 release next month.

pmchem posted:

dear bigreddot please include pylint by default in anaconda installers, I have to fill out paperwork that gets signed by several people each time I want a tiny little change in my software
Work in a closed room? I feel your pain; I would not want to go back to that kind of environment myself. I will pass this along to the Anaconda guys, but the installers are already quite large, so it's pretty hard to get them to add new things anymore. I will mention that you can install conda packages from local directories. Maybe you could get a pylint package approved once, and then keep a copy on the closed network to re-use? That's OK some places, but not others; I don't know your network or policies, so I'm just mentioning it in case it is useful and permissible. We do work with clients that have closed rooms, to create custom installers or restricted repo servers, but the cost structure for that is geared toward larger groups/wider deployments, rather than individual users.

Cingulate
Oct 23, 2012

by Fluffdaddy
dear bigreddot please get the conda statsmodels package to the recent (0.6.1) version I need the ordinal GEE api

Also I'll ask my dadboss to shell out some money for MKL Optimizations.

Edit:

BigRedDot posted:

I work for Continuum, and I wrote the original version of conda
:colbert:

I've started using Anaconda for all my Python needs and have just recommended that our newest PhD students set up their systems using Anaconda

Cingulate fucked around with this message at 10:23 on Jan 28, 2015

QuarkJets
Sep 8, 2008

While you're here, what's the typical lag time between NVidia releasing a new CUDA compute capability (such as the GTX 980 with CC 5.2) and NumbaPro supporting those new features?

BigRedDot
Mar 6, 2008

Cingulate posted:

dear bigreddot please get the conda statsmodels package to the recent (0.6.1) version I need the ordinal GEE api
It looks like it already is? :)
code:
bryan@laptop (git:feature/charts_inherits_plot) ~/work/bokeh/examples/charts $ conda update statsmodels
Fetching package metadata: .......
Solving package specifications: .
Package plan for installation in environment /Users/bryan/anaconda/envs/bokeh_docs:

The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    setuptools-12.0.5          |           py27_0         436 KB
    statsmodels-0.6.1          |       np19py27_0         4.6 MB
    ------------------------------------------------------------
                                           Total:         5.0 MB

The following packages will be UPDATED:

    setuptools:  12.0.4-py27_0    --> 12.0.5-py27_0   
    statsmodels: 0.5.0-np19py27_2 --> 0.6.1-np19py27_0

Proceed ([y]/n)? 

Fetching packages ...
setuptools-12. 100% |###########################################################| Time: 0:00:00   1.25 MB/s
statsmodels-0. 100% |###########################################################| Time: 0:00:02   2.41 MB/s
Extracting packages ...
[      COMPLETE      ] |#############################################################| 100%
Unlinking packages ...
[      COMPLETE      ] |#############################################################| 100%
Linking packages ...
[      COMPLETE      ] |#############################################################| 100%
EDIT: I do seem to recall there might be (or have been?) some problem with the very newest SciPy and statsmodels. That was an upstream issue though; I'm not sure if it has been resolved yet or not.

QuarkJets, I passed the question along; I'll let you know what I hear back.

BigRedDot fucked around with this message at 16:47 on Jan 28, 2015

EAT THE EGGS RICOLA
May 29, 2008

What's the best option for me to generate documentation for a module based on the docstrings? Sphinx?

BigRedDot
Mar 6, 2008

Quarkjets, this was the reply:

quote:

A couple facets to the answer: With the next Numba and NumbaPro release, we should be forward compatible to future compute capabilities as long as the user has an up to date NVIDIA driver installed.

That said, newer releases of the CUDA compiler may generate better code for newer compute capabilities. The lag for upgrading the CUDA compiler in Numba is probably 6 months.

We are getting a little squeezed as NVIDIA has been very aggressively deprecating different architectures in their toolkit releases that we would like to continue to support. For example, the latest CUDA 7 release has dropped all 32-bit platforms except ARM.

BigRedDot
Mar 6, 2008

EAT THE EGGS RICOLA posted:

What's the best option for me to generate documentation for a module based on the docstrings? Sphinx?
Sphinx, together with the [url=http://sphinx-doc.org/ext/autodoc.html]autodoc extension[/url] (it's included with Sphinx nowadays), is what you want.

If you want to document a few modules on one page, you'll often end up with .rst sources that look similar to this:
code:
Resources and Embedding
=======================

.. contents::
    :local:
    :depth: 2

.. _bokeh.resources:

``bokeh.resources``
-------------------

.. automodule:: bokeh.resources
  :members:


.. _bokeh.embed:

``bokeh.embed``
---------------

.. automodule:: bokeh.embed
  :members:

.. _bokeh.templates:

``bokeh.templates``
-------------------

.. automodule:: bokeh.templates
I understand why it doesn't, but I still kind of wish automodule had an option to auto-generate section titles sometimes.

Edit: I have had to learn waaaaay more about sphinx and docutils internals than I ever would have thought (because their docs kind of suck.... irony?), so if you have specific questions let me know.
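To get autodoc running at all, the only strictly required bit is enabling the extension in conf.py. A minimal sketch; the project metadata values here are placeholders, not anything from a real project:

```python
# Minimal Sphinx conf.py sketch; project metadata values are placeholders.
extensions = [
    'sphinx.ext.autodoc',   # pulls docstrings in via .. automodule:: etc.
]

project = 'myproject'
master_doc = 'index'

# Apply :members: to every automodule/autoclass by default,
# so the .rst directives don't each need the option.
autodoc_default_options = {'members': True}
```

With that in place, the .rst sources above are all you need; sphinx-build does the rest.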

BigRedDot fucked around with this message at 18:12 on Jan 28, 2015

EAT THE EGGS RICOLA
May 29, 2008

BigRedDot posted:


Edit: I have had to learn waaaaay more about sphinx and docutils internals than I ever would have thought (because their docs kind of suck.... irony?), so if you have specific questions let me know.

Cool, thanks. I've used it before a bunch of times, was more checking that I wasn't missing some new and wonderful thing for documentation.

Cingulate
Oct 23, 2012

by Fluffdaddy

BigRedDot posted:

It looks like it already is? :)
I SWEAR it was still at 0.5 when I checked.

Dominoes
Sep 20, 2007

Looking for advice on handling an incoming data stream. I'm dealing with information from a stock broker. My previous setup made GET and POST calls using requests. I'd ask the server for data, and I'd get JSON. Now I'm dealing with a server that sends a stream of messages.

The connection can register 'watcher' functions that take a single input, a text message from the data stream, and do something with it, i.e. print it, log it, etc. Whenever a new message comes in, the function processes it automatically, in its own thread I think. What's the best way to capture this data?

Here's an implementation I made that finds data that arrives in an arbitrary order when requested, and matches it up as soon as it has the info. It's pretty messy, and I'm assuming there's a nicer or more generally applicable way. I think I need some way to make it time out if unsuccessful. Maybe a timer in a new thread that can kill the loop?

Python code:
    # I send a request sent to the server for data here.

    done_flag = False
    while not done_flag:
        # Sleeping seems to give the watcher a chance to respond? It's slower without
        # the sleep.
        time.sleep(.001)
        for contract in contracts_enum:
            # If conId already found for this contract, it will have three elements.
            if len(contract) == 3:
                continue
            for details in cdb.buffer:
                if details.reqId == contract[0]:
                    contract.append(details.contractDetails.m_summary.m_conId)
        num_ids_found = sum(map(lambda x: len(x) == 3, contracts_enum))
        if num_ids_found == len(contracts_enum):
            done_flag = True
The watcher logs each new message to a buffer (a list). In this case, a unique buffer for only the type of data I just requested. I loop through the data indefinitely until I find what I need, then kill the loop with a flag. The loop might go through 300 times or so in half a second waiting for all the requested data to hit the buffer. I feel like the answer might have something to do with coroutines.
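One stdlib-only way to avoid the polling loop, and get the timeout for free, is to have the watcher push into a Queue and have the consumer block on get() with a timeout. A minimal sketch; the watcher/collect names are made up for illustration, not part of any broker API:

```python
import queue  # spelled Queue on Python 2.7

messages = queue.Queue()

def watcher(msg):
    # Runs on the stream's thread: just hand the message off.
    # queue.Queue is thread-safe, so no locking is needed here.
    messages.put(msg)

def collect(n_expected, timeout=5.0):
    """Block until n_expected messages arrive.

    Raises queue.Empty if the stream goes quiet for `timeout` seconds,
    which replaces the manual kill-the-loop flag.
    """
    results = []
    while len(results) < n_expected:
        results.append(messages.get(timeout=timeout))
    return results
```

This replaces the busy-wait (and the time.sleep hack) with a real blocking wait, so the watcher thread gets scheduled naturally.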

Dominoes fucked around with this message at 21:29 on Jan 28, 2015

SurgicalOntologist
Jun 17, 2004

I would use asyncio, although that may entail making lots of other changes.

Python code:
@asyncio.coroutine
def monitor_stream(...):
    while True:
        result = yield from streaming_thing
        do_something_with(result)
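The same idea reads a bit more directly with the async/await syntax added in Python 3.5, and asyncio.wait_for gives the timeout behaviour Dominoes asked about. A sketch; stream is assumed to be anything with an awaitable get(), e.g. an asyncio.Queue:

```python
import asyncio

async def monitor_stream(stream, handle, timeout=5.0):
    # Consume messages until none arrives within `timeout` seconds.
    while True:
        try:
            msg = await asyncio.wait_for(stream.get(), timeout)
        except asyncio.TimeoutError:
            break  # stream went quiet; stop monitoring
        handle(msg)
```

The broker callback would put_nowait() each incoming message onto the queue, and the coroutine drains it.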

Dominoes
Sep 20, 2007

Thanks; that library seems like exactly what I'm looking for. The docs are a bit daunting, but giving it a shot.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Dominoes posted:

Thanks; that library seems like exactly what I'm looking for. The docs are a bit daunting, but giving it a shot.

I'm lazy and barely thought about what you're doing, but look into gevent or eventlet and see if either does what you need.

Dominoes
Sep 20, 2007

Thermopyle posted:

I'm lazy and barely thought about what you're doing, but look into gevent or eventlet and see if either does what you need.
Cool, looking at those too.

Dominoes fucked around with this message at 00:12 on Jan 30, 2015

PongAtari
May 9, 2003
Hurry, hurry, hurry, try my rice and curry.
Edit: Never mind, answered my own question.

PongAtari fucked around with this message at 15:08 on Jan 30, 2015

Lazerbeam
Feb 4, 2011

Stupid question: Does upgrading from 3.3.0 to 3.4.2 mean I'll have to change any of my code?

Dominoes
Sep 20, 2007

No. New 3.x versions (unlike going from 2 to 3) are backwards compatible, unless your code relies on modules marked provisional. Not all 3.4 code will work with 3.3, but 3.3 code will work on 3.4.

QuarkJets
Sep 8, 2008

Lazerbeam posted:

Stupid question: Does upgrading from 3.3.0 to 3.4.2 mean I'll have to change any of my code?

Here's a comparison of the major differences going from 3.3 to 3.4.2

https://www.python.org/downloads/release/python-342/

It's mostly additions. Any changes are extremely low-level, so you probably won't have to change any of your code unless you're doing something extremely weird

salisbury shake
Dec 27, 2011

Lazerbeam posted:

Stupid question: Does upgrading from 3.3.0 to 3.4.2 mean I'll have to change any of my code?

Some stuff was deprecated, but you probably didn't use it. There were some C API changes. So: almost certainly not.
https://docs.python.org/3/whatsnew/3.4.html#deprecated-3-4

E: welp thanks awful.apk

Lazerbeam
Feb 4, 2011

Just upgraded and my stuff seems to work. Thanks :)

Illusive Fuck Man
Jul 5, 2004
RIP John McCain feel better xoxo 💋 🙏
Taco Defender
How do I add a self-signed certificate somewhere that Python will trust it for HTTPS? This is on a CentOS machine.

E.g.: My company has a self-signed certificate we use for testing stuff. I've trusted this certificate on my machine every way I know how.
I can go to https://test.company.com (which serves the self signed certificate), and I don't get a browser warning. I can curl https://test.company.com and it works. I can openssl s_client test.company.com and it works. If I use this python tool, I get loving "ERROR (SSLError): [Errno 1] _ssl.c:492: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed"

edit: had to add it to /usr/lib/python2.6/site-packages/requests/cacert.pem

Illusive Fuck Man fucked around with this message at 20:06 on Feb 2, 2015

Cingulate
Oct 23, 2012

by Fluffdaddy
I don't get parallel for loop syntax. At all. Like, I keep staring at the documentation for multiprocessing or whatever, and it's all Greek.

I just want to do something like
parfor x in range(0,10): my_list[x] = (foo(x))

FWIW, this is on Python 2.7. IPython, that is.

salisbury shake
Dec 27, 2011

Cingulate posted:

I don't get parallel for loop syntax. At all. Like, I keep staring at the documentation for multiprocessing or whatever, and it's all Greek.

I just want to do something like
parfor x in range(0,10): my_list[x] = (foo(x))

FWIW, this is on Python 2.7. IPython, that is.

What are you trying to accomplish?

Python code:
from multiprocessing import Pool

# (Pool as a context manager needs Python 3.3+; on 2.7,
# call pool.close() and pool.join() yourself.)
with Pool(NUM_OF_PROCESSES) as pool:
    # apply_async takes its arguments as a tuple and returns an
    # AsyncResult; collect them all first, then .get() the values
    results = [pool.apply_async(foo, (x,)) for x in range(0, 10)]
    my_list = [r.get() for r in results]

    # or, simpler (imap does preserve input order):
    my_list = list(pool.imap(foo, range(0, 10)))
That does what you want in a literal sense. Unless you're outsourcing heavy IO or computation, though, forking/starting a bunch of interpreters that need to serialize their results between processes might become a bottleneck.

Also, the standard warning about blocking in Python applies. Use the non-blocking methods multiprocessing provides if you're avoiding threading because of blocking slowdowns.
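For the common "parfor" case, Pool.map is usually all you need: it blocks until every result is in and preserves input order. A minimal self-contained sketch; square is a stand-in for the real work function, which must be defined at module top level so it can be pickled:

```python
from multiprocessing import Pool

def square(x):
    # Stand-in for a CPU-bound task; must be a top-level def so the
    # child processes can pickle and import it.
    return x * x

def parallel_squares(n, processes=4):
    # Pool.map blocks until all results are in and preserves input order.
    with Pool(processes) as pool:
        return pool.map(square, range(n))

if __name__ == '__main__':
    print(parallel_squares(10))
```

For something closer to MATLAB's parfor over an existing list, pool.map(foo, my_inputs) is the one-liner.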

salisbury shake fucked around with this message at 21:01 on Feb 2, 2015

vikingstrike
Sep 23, 2007

whats happening, captain

Cingulate posted:

I don't get parallel for loop syntax. At all. Like, I keep staring at the documentation for multiprocessing or whatever, and it's all Greek.

I just want to do something like
parfor x in range(0,10): my_list[x] = (foo(x))

FWIW, this is on Python 2.7. IPython, that is.

This article may be helpful: http://chriskiehl.com/article/parallelism-in-one-line/ In particular, using the map() function on a multiprocessing Pool() object at the very end.

BlackMK4
Aug 23, 2006

wat.
Megamarm
e: nm

onionradish
Jul 6, 2006

That's spicy.
I'm trying to extend my unit/functional testing and do some development that doesn't hit live URLs for my web scraping scripts. I'm struggling to understand how to monkeypatch or mock/stub the requests module. The answers I'm finding via StackOverflow and web search are incomplete or just punt the solution down the field to recommend additional libraries like responses/HTTPretty.

All I want for my basic testing/development is to override the requests.get() method to return an object with a .content attribute that I'd fill with a local HTML file/string, but I can't find a complete example. I'd like to see a solution that uses the mock module -- since it's part of the standard 3.x library -- so I can apply the knowledge to other objects or attributes down the road using a common module rather than some custom fake class I can cobble together.

Python code:
# EXAMPLE OVER-SIMPLIFIED CODE
def parse(url):
    r = requests.get(url)  # HTML is delivered in r.content
    return some_parsed_value(r.content)  # lxml or whatever extracts some value
    
# EXAMPLE TEST
def test_parsed():
    # assume something that maps a url to a local HTML file for content
    assert parse(url) == 'xyz'
Any references or sample code for this kind of testing?

EDIT: I've been able to get the desired results by making fake classes in the test module and manually overriding methods and attributes, but it doesn't seem like the best way to address the issue, especially if I ever had to work with others on the same codebase.

onionradish fucked around with this message at 20:12 on Feb 4, 2015

Space Kablooey
May 6, 2009



Mocking is something like this:

Python code:
import requests
from unittest import mock

class MockResponse():
    content = 'xyz'

# EXAMPLE OVER-SIMPLIFIED CODE
@mock.patch('requests.get')
def parse(url, mock_get):
    mock_get.return_value = MockResponse()
    r = requests.get(url)  # HTML is delivered in r.content
    return some_parsed_value(r.content)  # lxml or whatever extracts some value

def some_parsed_value(value):
    return value

# EXAMPLE TEST
def test_parsed(url):
    # assume something that maps a url to a local HTML file for content
    print(parse(url) == 'xyz')

test_parsed('url')
As long as you do import requests in the modules that you want to test, that style of mocking will most certainly work.

The tricky part of mocking is that you mock where the imported function is being used, not where it was declared. For example:

Python code:
#main_module.py

from unittest import mock

from another_module import another_thing


@mock.patch('another_module.thing')
def mocked(mock_thing):
    mock_thing.return_value = 'yep'
    print(another_thing() == 'yep')

mocked()

#submodule.py
def thing():
    return 'nope'

#another_module.py
from submodule import thing


def another_thing():
    return thing()

onionradish
Jul 6, 2006

That's spicy.
Thanks -- I'd been making it way more complicated than necessary!

Cingulate
Oct 23, 2012

by Fluffdaddy

vikingstrike posted:

This article may be helpful: http://chriskiehl.com/article/parallelism-in-one-line/ In particular, using the map() function on a multiprocessing Pool() object at the very end.
Thank you, it actually did.


salisbury shake posted:

What are you trying to accomplish?
Trying to do this without having to use map().

:(

salisbury shake
Dec 27, 2011

Cingulate posted:

Thank you, it actually did.
Trying to do this without having to use map().

:(

You can use a loop and accomplish the same thing with Pool.apply() or apply_async()

onionradish
Jul 6, 2006

That's spicy.
I'm running 2.7.x on Windows and have an sqlite3 annoyance.

The version of the sqlite3.dll that comes bundled with Python 2.7 on Windows (at least) is not current, so some of my scripts that need the new DLL fail (like access to the Firefox bookmark SQLite database). Up to now, I've just replaced the default sqlite3.dll file in C:\Python27\DLLs with a newer one manually downloaded from the SQLite site. Of course, I have to remember to do this every time there's a Python upgrade or I install from scratch.

Is there a "proper" way to override the default sqlite3 DLL or handle this other than what I'm doing? And, as a rant, why the hell hasn't the Python 2.7 Windows distro been updated to use the latest DLL in the first place? I just updated to 2.7.9 and got bit in the rear end with this yet again.

QuarkJets
Sep 8, 2008

onionradish posted:

I'm running 2.7.x on Windows and have an sqlite3 annoyance.

The version of the sqlite3.dll that comes bundled with Python 2.7 on Windows (at least) is not current, so some of my scripts that need the new DLL fail (like access to the Firefox bookmark SQLite database). Up to now, I've just replaced the default sqlite3.dll file in C:\Python27\DLLs with a newer one manually downloaded from the SQLite site. Of course, I have to remember to do this every time there's a Python upgrade or I install from scratch.

Is there a "proper" way to override the default sqlite3 DLL or handle this other than what I'm doing? And, as a rant, why the hell hasn't the Python 2.7 Windows distro been updated to use the latest DLL in the first place? I just updated to 2.7.9 and got bit in the rear end with this yet again.

I don't really know the answers to your question, but have you checked to see whether Python 3.4 has the latest DLL? That might kill two birds with one stone, since 2.7 is kind of becoming deprecated

Also, have you tried using Anaconda? It might have the latest DLL

duck monster
Dec 15, 2004

pmchem posted:

dear bigreddot please include pylint by default in anaconda installers, I have to fill out paperwork that gets signed by several people each time I want a tiny little change in my software

I'm so glad I'm not working govt science anymore. This was literally the bane of my existence. I managed to cause an interdepartmental shitfight that led to a senior bureaucrat resigning in protest after I committed an urgent code change live to fix an error in a wind speed calculation without going through UAT and all the blah blah committees and change requests and bullshit ITIL juggling, since loving firefighters were in danger from the calculation being wrong. But when you're a bureaucrat, having your triplicate-filled CRF go through the proper channels and get stamped by middle managers in all 7 circles of hell is more important than not having dead firemen. gently caress that poo poo!

Cingulate
Oct 23, 2012

by Fluffdaddy
Okay, I think I basically got the multiprocessing thing now - my problem was that I was trying to avoid a functional style, but once I stopped trying to make everything be a for loop, it started making sense.

However - the thing I want to parallelise is already parallelised. What I mean is, I have a function that inherently utilises 10 or so of our (>100) cores. I want to run multiple instances of that function in parallel, to get closer to utilising 50 or so of our cores (and no, I can't really make the functions themselves able to parallelise more efficiently).
Basically, I want to apply a large decomposition to large datasets. I have 20 independent large datasets and want to process them in parallel. But the decomposition function is already mildly parallelised.

When I simply do what's explained in vikingstrike's link (multiprocessing.dummy.Pool), I actually make everything much slower, because the individual sessions only utilise 1 core.
Can I somehow parallelise parallelised functions (execute multiple instances of a parallelised function in parallel)?

Am I making sense?

ArcticZombie
Sep 15, 2010
In the scrabble solver I'm working on, there's a big difference in the time it takes to compute moves when it's running by itself and when it's running as part of the Flask web app.

For example, given the following game:



I ran the computation 10 times, both on its own and as part of the Flask app. The Flask app is importing the same module, instantiating the same classes and calling the same methods as the standalone test. Calculating every legal move, these are the results:

code:
Scrubble running in Flask (s):
10.107224941253662
18.127454042434692
21.389935970306396
20.2083158493042
17.303011178970337
19.36518907546997
20.441646099090576
22.377758026123047
19.84891104698181
21.805835008621216

Just Scrubble (s):
8.892906904220581
8.91041898727417
8.889863014221191
8.8021240234375
8.782242059707642
8.919314861297607
8.81180715560913
8.797425985336304
9.098114967346191
9.285434007644653
It only seems to have such a large difference when blank tiles come into it. If I change the tiles in the rack to "KEEPERS", I get the following results:

code:
Scrubble running in Flask (s):
0.4104149341583252
0.4107639789581299
0.3878779411315918
0.40062594413757324
0.41103696823120117
0.4136691093444824
0.4334678649902344
0.38866591453552246
0.40020203590393066
0.4542999267578125
0.4286069869995117

Just Scrubble:
0.4019341468811035
0.3897378444671631
0.39071011543273926
0.3904130458831787
0.40022897720336914
0.3827168941497803
0.387239933013916
0.38861608505249023
0.3991999626159668
0.3950839042663574
The way the algorithm handles blank tiles is to loop over every letter, trying each letter in its place. Why is there such a difference in the web app when using a blank tile?

namaste friends
Sep 18, 2004

by Smythe
This is loving awesome.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


I am drawing a blank on this stupid problem, so I've come for your help. I'm using the pysvn module and getting an error on a unicode file name. If I run svn by hand from the command line everything works fine, but pysvn throws this error...

code:
Error converting entry in directory '/path/to/images/folder' to UTF-8
Can't convert string from native encoding to 'UTF-8':
image_?\195?\169b?\195?\169.gif
So I figure I need to set the locale somewhere so I added this to my script:

code:
locale.setlocale(locale.LC_ALL, '')
But that doesn't seem to affect the pysvn module. So where do I set this so pysvn will pick up my encoding? I can't find poo poo in the pysvn docs. Any ideas?


QuarkJets
Sep 8, 2008

ArcticZombie posted:

In the scrabble solver I'm working on, there's a big difference in the time it takes to compute moves when it's running by itself and when it's running as part of the Flask web app.

For example, given the following game:



I ran the computation 10 times, both on its own and as part of the Flask app. The Flask app is importing the same module, instantiating the same classes and calling the same methods as the standalone test. Calculating every legal move, these are the results:

code:
Scrubble running in Flask (s):
10.107224941253662
18.127454042434692
21.389935970306396
20.2083158493042
17.303011178970337
19.36518907546997
20.441646099090576
22.377758026123047
19.84891104698181
21.805835008621216

Just Scrubble (s):
8.892906904220581
8.91041898727417
8.889863014221191
8.8021240234375
8.782242059707642
8.919314861297607
8.81180715560913
8.797425985336304
9.098114967346191
9.285434007644653
It only seems to have such a large difference when blank tiles come into it. If I change the tiles in the rack to "KEEPERS", I get the following results:

code:
Scrubble running in Flask (s):
0.4104149341583252
0.4107639789581299
0.3878779411315918
0.40062594413757324
0.41103696823120117
0.4136691093444824
0.4334678649902344
0.38866591453552246
0.40020203590393066
0.4542999267578125
0.4286069869995117

Just Scrubble:
0.4019341468811035
0.3897378444671631
0.39071011543273926
0.3904130458831787
0.40022897720336914
0.3827168941497803
0.387239933013916
0.38861608505249023
0.3991999626159668
0.3950839042663574
The way the algorithm handles blank tiles is to loop over every letter, trying each letter in its place. Why is there such a difference in the web app when using a blank tile?

Could it be that every time you update the blank tile to check a new letter, it invokes something unnecessary in Flask? It's kind of hard to be sure of anything without seeing any of your code.
