ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Vanadium posted:

I am trying to decide whether perl or php's behaviour annoys me more than having to explicitly convert integers when I want to multiply them with doubles in haskell, help me out here.

Haskell's number types are always the worst design decision. Always.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

shrughes posted:

Lies. Haskell's numeric library is actually quite reasonable. If you recognize that designing a numeric library for extensibility is insane anyway.

Yeah, it's OK as long as you don't want to create your own number types, because then you don't have to care about the interrelationships between the Num, Fractional, Integral, Real, RealFrac, RealFloat, and Floating classes, or the fact that many of their methods are scattered among them almost at random. It's just kind of sad that the Haskell designers went through all the work of creating an intricate class hierarchy, and it's not even easy to build what should be simple extensions to the numeric types like vectors, matrices, or numbers that carry units.

Personally, I'm hoping something like numeric-prelude makes it into some HaskellPrime.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender
If you really want to use a monad transformer or something, it doesn't look much worse. It just behaves a little differently, because you're no longer automatically strict in modifications and you no longer have a guarantee of mutable state, only a strong reason to believe that the compiler will choose that particular optimization; you can't guarantee mutable structures outside of ST or IO, because that's what they're for.

code:
import Control.Monad.State
import qualified Data.Map as M

-- run a StateT computation with the given initial state supplied first
withInitialState = flip evalStateT

main = withInitialState M.empty $ do
  modify $ M.insert "go" 0
  get >>= lift . print
  modify $ M.insert "back" 1
  get >>= lift . print
  modify $ M.insert "to" 2
  get >>= lift . print
  modify $ M.insert "gbs" 3
  get >>= lift . print

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Janin posted:

This isn't an example of mutability; Control.Monad.State is actually just a wrapper around (\state -> (result, newState)) -style functions. IO and ST are the only types which support true mutability.

Exactly. The compiler may be able to optimize such accesses into mutation internally, but there's no guarantee it'll do so.
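
To make that concrete, the State wrapper looks roughly like this (a minimal sketch of the shape only, not the actual mtl definitions, which go through StateT and a type class):
code:
newtype State s a = State { runState :: s -> (a, s) }

-- "Modifying" the state just builds a new function that threads the updated
-- value along; nothing is mutated, the old state simply stops being referenced.
modifyState :: (s -> s) -> State s ()
modifyState f = State $ \s -> ((), f s)

Whether the compiler turns that state-threading into in-place updates is entirely up to the optimizer.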

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

KARMA! posted:

Mind telling us why?

A simplified answer:

For most hash constructions, you can amortize the costs of a changing suffix more readily than a changing prefix, meaning that you can take a dictionary of unsalted hashes for common passwords and cheaply transform it to a dictionary of suffix-salted hashes for common passwords, allowing more effective attacks against databases of accounts.
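
To make the amortization concrete, here is a hedged sketch assuming a Merkle-Damgård-style hash used naively through cryptonite's incremental API, and an attacker who keeps per-password midstates rather than finalized digests (the function names are mine):
code:
import Crypto.Hash (Context, Digest, SHA256, hashInit, hashUpdate, hashFinalize)
import Data.ByteString (ByteString)

-- Pay for the common password's blocks exactly once...
midstate :: ByteString -> Context SHA256
midstate password = hashUpdate hashInit password

-- ...then each suffix salt only costs the salt's blocks on top of the midstate.
suffixSalted :: Context SHA256 -> ByteString -> Digest SHA256
suffixSalted ctx salt = hashFinalize (hashUpdate ctx salt)

The midstates can be built before any particular salt is known, which is what lets a precomputed dictionary be reused; with a changing prefix, the reusable work depends on the salt, so there is nothing useful to precompute in advance.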

Please note that this is a broad discussion of one particular class of attack, and there are a whole lot of caveats attached to this description which I'm not going to go into here.

The answer you should keep in mind is more general: Cryptographic protocols are really incredibly hard to design well, and they tend to be extremely subtle and fragile, often for reasons that don't appear to make sense unless you have had extensive specialist education in the field. If you don't know exactly why every single decision was made in an existing protocol, you are not qualified to change the protocol. Please don't invent your own cryptographic protocols.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Jabor posted:

It looks to me like the username is taking the place of a nonconstant prefix.

Oh, probably, but it's a goofy construction. Usernames are lower-entropy and often longer than the random nonces you're supposed to use as salts, which will have who-knows-what kind of effect on attackers trying to minimize costs, because now they can potentially exploit the whole first block of the hash being a combination of low-entropy terms; not to mention the possibility of collisions where user foo has password barqux and user foobar has password qux. All of which means that the potential attack surface is larger in some really-hard-to-quantify manner.

There's just no good reason for using a non-standard protocol which seems like it might have the same security properties as the real thing but hasn't been studied when you can just not write your own cryptographic protocols.

Seriously, cryptographic protocols are really hard for lots of small, subtle, incredibly obscure reasons. It's very hard to get the education you need to be able to do a halfway-adequate job of reasoning about possible attacks, the education expires extremely quickly, and even the experts get it wrong a large proportion of the time. It can be fun to see if you can find all the problems in some homemade protocol, but even if you and your best friend and 5000 people on the internet think you found them all, you're simply wrong.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Mustach posted:

Even worse, do they think that return is a function?

Clearly return is the continuation implicitly passed to the current function, and if C supported higher-order programming you'd be able to pass it as a parameter to other functions for continuation-passing-style programming. People who use parentheses on return are just planning ahead to be future-compatible with FunctionalC.
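
For what it's worth, in a language that does have first-class continuations this stops being a joke; a rough Haskell sketch of the same idea using callCC (the names are mine):
code:
import Control.Monad (when)
import Control.Monad.Trans.Cont (evalCont, callCC)

-- "ret" plays the role of return: it's an ordinary function, so it could be
-- handed to helper functions for continuation-passing-style early exits.
safeDiv :: Int -> Int -> Int
safeDiv x y = evalCont $ callCC $ \ret -> do
  when (y == 0) (ret 0)
  return (x `div` y)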

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Kelson posted:

That's what I thought at first, except key_length is presumably 0 for the empty string. That means key_end = key + 0, which is really just key. Which skips the for(sptr = key;sptr < key;sptr++) loop?

If that weren't the case, the value would be whatever happened to be in the "invalid" memory block key pointed towards (which may change between runs).

Correct, except the memory block isn't invalid in that case, it's explicitly initialized to zero by the bzero call prior to the block.

The value Aleksei Vasiliev chose doesn't come from the code, it's arbitrary. It just happens to have the property that when XORed over itself in 16-byte chunks it becomes zero. The key "4re35na2aTaVasAy4re35na2aTaVasAy" is the same as the key "" is the same as the key "0123456789abcdef0123456789abcdef" is the same as the key "abcdefghijklmnopabcdefghijklmnop" is the same as the key "aaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbb" and so on and so on.
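
If it helps, here is the folding behaviour in miniature (my own sketch of the XOR-folding described here, not the actual MySQL source):
code:
import Data.Bits (xor)
import Data.Char (ord)

-- Fold a key into a zeroed 16-byte buffer, XORing it in 16 bytes at a time.
foldKey :: String -> [Int]
foldKey = foldl xorChunk (replicate 16 0) . chunksOf16
  where
    xorChunk buf chunk = zipWith xor buf (map ord chunk ++ repeat 0)
    chunksOf16 [] = []
    chunksOf16 s  = take 16 s : chunksOf16 (drop 16 s)

foldKey "" and foldKey "4re35na2aTaVasAy4re35na2aTaVasAy" both come out as sixteen zero bytes, which is exactly the equivalence above.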

ShoulderDaemon fucked around with this message at 04:35 on Feb 24, 2011

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Kelson posted:

If key_length is 0, the for loop never executes (no xor takes place in my_aes_create_key) which leaves rkey bzero'd.

Correct.

Kelson posted:

If mysql was retarded and key_length somehow weren't 0 in the empty-key instance above, it'd xor *sptr into rkey, which is "invalid" memory (sptr = key; sptr < key+key_length) not the bzero'd buffer (ptr = rkey; bzero(rkey, AES_KEY_LENGTH/8)).

This does not happen; key_length is zero in the empty-key instance.

Kelson posted:

I believe "4re35na2aTaVasAy" is xor'd into each 16-byte chunk; where is that value in the code above?

Incorrect. "4re35na2aTaVasAy" is not an artifact of the code; it's just something Aleksei Vasiliev chose arbitrarily. "4re35na2aTaVasAy" is not special.

The only thing that's special about the key that Aleksei Vasiliev chose is that it consists of a 16-character string, repeated an even number of times. Any such key will be equivalent to the all-zeroes key or the empty key.

Kelson posted:

I may certainly be missing something, I'm just asking what. It appears to be in code not shown in the link (such as within rijndaelKeySetupEnc(aes_key->rk, rkey, AES_KEY_LENGTH)). No?

Nowhere within any of the MySQL code or its support libraries will you find "4re35na2aTaVasAy". It's not a value that comes from the code. It's just a value that Aleksei Vasiliev picked to demonstrate the problem.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Scaevolus posted:

That's not how aes_encrypt works, though. It expects to get a (random) 128-bit string.

It should probably be noted in the documentation, but I don't see how this is a horror.

It should raise an error if it's handed a key that's too long or too short, rather than use poor key compression. The problem is that by allowing long keys, it encourages misuse (such as inputting an English passphrase) and compounds that problem by using a key compression technique which may significantly reduce entropy in highly structured keys (such as, for example, an English passphrase).

If you're going to do key compression, there's essentially no case where the algorithm MySQL uses is the right choice. The coding horror is that it tries, in a completely braindead and unsafe manner, to correct for a programmer error instead of just failing.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender
OCaml will be easier to learn than Haskell if you are coming from a strict language background.

You should still learn both, though; Haskell is fun.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

rjmccall posted:

They're still well short of Turing complete.

You can embed arbitrary Perl expressions in a Perl regexp.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

yaoi prophet posted:

What's that from? It's pretty obviously Haskell, I'm guessing it's some kind of funky metaprogramming stuff?

Looks like part of some Template Haskell for an option-parsing library; it's producing a Haskell expression at compile time by direct manipulation of syntax trees to serve some purpose. Template Haskell is pretty ugly to begin with, and the indentation is not helping at all there.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Janin posted:

yup

It's part of an option parsing library I'm writing, which lets users define options using templates.

the end result is pretty and nice to use, but the innards :gonk: . This is my first time ever using template haskell, and I feel like some CS101 freshman confronted with a switch statement.

Use AppE in `infix` position if you have to use it at all; it tends to get a bit more readable that way. If you're doing this a lot, just make an operator for it.

You don't usually need to do things like ListE (map (LitE . StringL) longs) when you can just [|longs|]. I try to avoid LitE altogether. In general, TH looks a lot nicer if you use the quoters to do as much heavy lifting as possible. Remember that you can use Template Haskell in your Template Haskell.

code:
  let qValExp = fromMaybe [| const valid |] $ lookup fname valDecls
  bindExp <- [| $(return var_optionParse) shorts longs def $(qParserExp) $(qValExp) |]
  return $ BindS (VarP varName) bindExp

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

TRex EaterofCars posted:

I will defer to those who know better but isn't the Maybe monad what's doing all the magic there?

The Maybe type is indeed what is in play there, but the fact that Maybe happens to be a Monad isn't relevant. The instance of Functor for Maybe (a much simpler categorical structure) is all that's required for that code.
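
A tiny illustration (not the code in question, just the Functor instance at work):
code:
-- Only Functor is needed: apply the function under Just, leave Nothing alone.
bump :: Maybe Int -> Maybe Int
bump = fmap (+ 1)

-- bump (Just 2) == Just 3
-- bump Nothing  == Nothing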

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Hughlander posted:

Personally I think GPL v3 has pushed a lot of people into a more MIT/BSD/Apache style license. I used to release my code under GPL v2, but now I tend to use MIT instead.

I don't quite get this. If GPLv2 does what you want, then why don't you release software under GPLv2?

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Internet Janitor posted:

What I'm saying is, why are you so concerned about forcing other people to open-source their derivative works? What if a software engineer working for Oracle studied some code in Clang and then used the knowledge he learned to improve proprietary software without directly stealing a single line? Isn't that "profiting without giving anything back"?

I would argue that's not a derivative work under US copyright law. Fair use allows studying a work, or producing works "inspired" by other works; if there's no direct copying, there isn't derivation. (IANAL, This Is Not Legal Advice)

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Internet Janitor posted:

best of all, placing their works in the public domain.

Please never ever ever encourage someone to place code in the public domain. Please don't do it yourself. The WTFPL is enormously better than public domain declarations.

Why? Because in a lot of countries, it is actually impossible for an author to release their own work into the public domain. Or it requires a specific registration, which may cost money. Or it requires auditable documentation of everyone who may have ever contributed to the work. Or it only applies to people who received the work by physical mail. Or numerous other crazy restrictions.

And that means that in a lot of the world, you just can't use this supposedly "public domain" software. Any license is better than public domain declarations.

Yes, I have run into problems from this on real-world projects, and I've had to harass some poor coder into agreeing to give me their software under the MIT license instead of "public domain", which wasted a few hours of their time and mine just so that I won't get attacked by a lawyer.

I know it seems like "public domain" should be the least-restrictive thing you can do, but it's really not. Because of the specific legalities involved in that phrase, it's a huge tarpit that is better off avoided.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Suspicious Dish posted:

Verify against the reference implementation?

PBKDF2 is described as agnostic to the underlying cryptographic primitives, as well as being parametric in iteration count and key length. This means that there may not be a "reference implementation" easily available for the particular combination of parameters chosen for your implementation. It is much harder for amateurs to verify correct implementations of PBKDF2 than it has any right to be.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender
I think the XML-ish completely unhelpful response to that complaint is that in <foo>12345</foo>, the 12345 is obviously of type foo, and you have to know the context of the message if you want a more concrete interpretation.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Suspicious Dish posted:

Right, so the type data is out of band, which means that the logic that can be implemented for you by a generic XML parser becomes less and less, which means that "generic toolkits" invent their own hacks because people are lazy.

Yeah, it's kind of shameful that so many supposedly "generic" intermediate layers are prone to simply throwing out information and relying on the most ambiguous representations they can possibly get away with.

Horror: I have actually written modern code (in the last 2 years) for a new application that used Sun RPC to communicate. I was working in Haskell.

Horror2: It was pretty much the best experience I've ever had writing IPC code, and I have little doubt that time will show it to be the most reliable IPC code I've written.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Ensign Expendable posted:

That is weird. Why not 255?

Trailing dot + 3-character extension + NUL byte.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Manslaughter posted:

Not the same problem, but this reminded me of when I found a if (!(!(object))) once. :psyduck:

The !!expr construct is a somewhat common pattern in a few languages; you can think of !! as the cast-to-bool operator, because it normalizes the following expression to be exactly true or false, which is sometimes desirable.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

qntm posted:

Perl even has unless { ... } else { ... } although thankfully there is no elsunless. Yet.

Well, there's unless ( ... ) { ... } elsif ( ... ) { ... } else { ... } which is arguably worse because now we're mixing positive and negated conditionals.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

yaoi prophet posted:

You shouldn't operate + for strings because strings don't form a group under concatenation :colbert:

They totally can, though, if you introduce strings where every element can either be positive or negative. If a positive character is next to a negative character with the same character value, then the two cancel out and the string gets shorter. Such strings are a trivial generalization of "normal" (or "strictly positive") strings, and readily form a group under concatenation.

They also let you unprint stuff, or untransmit things from the network. Obviously a useful feature, hopefully we'll see these in C++14.
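
A sketch of the construction, in case it isn't obvious how the cancellation works (my own toy version; it's essentially the free group on the character set):
code:
-- "Signed" characters: adjacent opposite-signed copies of the same character
-- cancel, so these strings form a group under concatenation.
data SChar = Pos Char | Neg Char deriving (Eq, Show)

inverse :: SChar -> SChar
inverse (Pos c) = Neg c
inverse (Neg c) = Pos c

-- Concatenate and cancel at the seam (assuming both inputs are already
-- reduced, i.e. contain no adjacent cancelling pairs of their own).
cat :: [SChar] -> [SChar] -> [SChar]
cat xs ys = foldr step ys xs
  where
    step x (z:zs) | x == inverse z = zs
    step x zs                      = x : zs

-- The empty string is the identity, and inverting a string reverses it and
-- flips every sign, so cat s (invert s) == []: the promised untransmit.
invert :: [SChar] -> [SChar]
invert = reverse . map inverse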

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender
I might argue that if a type has a well-known cast-to-bool operator with very "expected" semantics for that type, then it should not also have overloads for the short-circuiting operators, because when reading code that uses them I would expect it to act like a bool and might be surprised that my control flow isn't what I think it is.

That said, I don't see anything wrong with using operator overloads for && and || if your type doesn't pretend to be a bool at all. The confusion goes away completely.

Also, now I want C++14 to give me something like "int &&&i" as a parameter which produces a thunk for lazy evaluation.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender
Just use nickle instead, honestly.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender
Here's some code I just wrote that I'm ashamed of.

The idea is, I have a variadic template that I'm using in a few places. I need to write a function which takes as parameters a number of strings equal to the number of types I'm passing to this variadic template. As near as I can tell, expanding a parameter pack while ignoring the types it expands to is not trivial.

C++ code:
#include <string>

template<typename C, typename X>
struct Constant {
  typedef C type;
};

template<typename ...dataType>
void set_dataType_names(typename Constant<std::string const &, dataType>::type... name) {
  // In here I can use name, which is a parameter pack the same size as dataType,
  // but consists entirely of std::string references.
}

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Plorkyeran posted:

Trivially better:
C++ code:
template<typename C, typename _> using Constant = C;

Thanks, my code is now less ugly!

Plorkyeran posted:

set_dataType_names is a pretty horrifying name.

The actual name in the code is newEventType, and it takes a few other parameters; I just wanted to make the use case in the code sample somewhat more comprehensible.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Tesseraction posted:

Okay, but how does changing "x > y" to "x != y" not count as a fundamental change in the behaviour of a program?

Over the set of values for which the program's behaviour is defined, those two operations are the same in this context. For all other values, because the behaviour isn't defined anyway, it doesn't matter which operation you choose. There is no situation where the behaviour is both defined and differs between the two operations, so there is no fundamental change. Hence, a legal optimization.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Steve French posted:

Which would be awfully nice for the compiler to know, but it doesn't.

This isn't how undefined-behaviour assumptions work in compilers. The compiler does know that, because if you didn't pass a large enough array into the function, then the result would be undefined, and the compiler is allowed to assume that your program is well-defined. Compilers don't check to make sure that a program is well-defined; they simply assume that it is and perform optimizations within that context, because it doesn't matter if the optimizations break not-well-defined programs: those programs were already undefined according to the spec anyway.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

v1nce posted:

Note that heavy calls to /dev/urandom will expire the cache of available random values, and can result in a pause while more are generated.

urandom is the one that never blocks. random is the one that arbitrarily blocks when you use it heavily, for not-very-good reasons.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

LeftistMuslimObama posted:

Forgive me if this is a dumb question, but I don't really get the point of function-like macros anyway. Is it just that they're inlined so you don't add a level to the stack for small computations? It seems like most of the time that level of optimization would be unnecessary, and if I'm doing that calculation often enough that I need a shortcut for it, I'd rather make it a function in a header file that I can use in other source files or projects.

One "nice" thing about macros is that they can be non-syntactic. In our codebase at work, we have macros along the lines of:
C++ code:
#define ASSERT(c,e,m) do if (! (e)) {                    \
  ::std::ostringstream _msg;                             \
  _msg << __FILE__ ":" << __LINE__ << ": " #c ": " << m; \
  throw ::ducky::Assert##c(_msg.str());                  \
} while (false)
This lets you write assertions like:
C++ code:
ASSERT(ContractViolation, foo == bar, "Expected " << bar << " but got " << foo << " from " << dump_state());
This is doing a few neat things:
  • The different types of assertion exceptions are all handled by the same macro, using token pasting.
  • The assertion messages have filenames and line numbers, which you can't get if you generate them with a real function.
  • The non-syntactic message parameter allows us to build complex assertion messages at runtime without having to explicitly create temporaries at assertion sites.
  • The non-syntactic message parameter is evaluated only if the assertion fires, so if e.g. dump_state() is very slow, it will not be called in the normal case.

It's not like we couldn't work without it, but it makes some parts of the codebase a lot nicer to deal with.

That said, a lot of the "obvious" uses of macros are either terrible poo poo nobody should want to do, or are doable with just a little more work using either normal functions or templates. Macros have their place, mostly in reducing code that you would otherwise be forced to manually duplicate at every usage site.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender
I once shipped some code that connected to a postgres database through a handle named mySQL.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

TooMuchAbstraction posted:

For marketing purposes, I fail to see why a slightly-imperfect filter is so awful. False negatives (valid addresses marked invalid) will not hear from us; false positives ("wrong numbers") will either bounce or accidentally contact the wrong person. But either way as long as the failure rate is low, we'll end up with a collection of addresses that are largely valid, which is a hell of a lot better than a "try the address and see if it delivers" approach.

I...don't really get why this is controversial?

I think most of us are assuming that you are vastly overestimating the cost of a false positive. Email bounces and misdeliveries are cheap. Your marketing coverage will be better (zero percent false negative rate!) if you just don't do any validation, and your costs will be very similar - there's no downside, a nontrivial potential upside (assuming you believe your marketing works), and it's less work for you!

Fundamentally, the validity of an email address carries no meaningful information up until the point at which you are actually about to send an email; at that point, the easiest and most accurate way to determine validity is to simply send the email. Doing anything else just means that you'll piss off people whose email addresses are incorrectly rejected, at little-to-zero gain.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Dr. Stab posted:

Oh, sure, obviously. Now what is the regex to simulate a turing machine?

In Perl, $foo =~ s/.*/$&/ee; is the same as $foo = eval $foo;.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender
code:

Works in quite a few different languages.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

JawnV6 posted:

Now that I say that, I'm not really sure if you can get an interrupt in the middle of a macro-fused pair of instructions. Gosh, if only there were a micro-architect around to help!

I just skimmed through the simulator, and as near as I can tell, the Core microarchitecture never interrupts in between macro-fused ops. External interrupts are not guaranteed to arrive at the end of any particular instruction, just at the end of some instruction (or at a few other points, e.g. sync points within rep movs), so an external interrupt is simply deferred to the end of the pair. There are a few architecturally-visible interrupts (in/out instructions, disabling/enabling interrupts, hardware breakpoints) that could in theory happen on the first macro-op of a pair, but we take care to simply not fuse ops that are vulnerable to those.

Signed, a microarchitect.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

JawnV6 posted:

Ok, interrupt deferral makes sense. But you can still fault after the cmp and before the second op though? Put the jne on a different page requiring a fault or better yet straddling the canonical boundary. Everyone gets that edge wrong. I imagine those situations will never get fused though, the FE will recognize it needs to issue the fault before the decoder ever sees the pair.

Yeah, you're probably going to fuse only when you have actual instruction bytes available, or something equivalent (like you're getting fused ops from a stream cache). If you have to stall to fetch bytes for instruction 2, you're going to issue instruction 1 immediately instead of waiting to fuse it. Fusion is an opportunistic optimization that helps, but isn't essential; in practice you can get away with a conservative approach of only fusing when it's easy to prove safety and still see enough reasonable performance gains to justify the effort.

Making the fusing logic very simple and conservative also means that it takes less power, less die area, and is easier to do timing for. It wouldn't be shocking to see rules like "we only fuse if one of the instructions is a MOV and the other instruction is a non-memory-op and both instructions are coming out of the stream cache" still being good enough to see 1% improvement on some benchmark that some client cares about. If you can get a 1% improvement for what might as well be free then you're going to take it, especially if it's the sort of thing that you can potentially tune and improve in future generations to be even better.

JawnV6 posted:

There's no good way to fuse up a zero-length call though. The STA/STD pair required for the call's implicit push getting fused and not offering an instruction boundary between them, if the implicit stack location had to be paged in just to be marked dirty, etc. Too much going on there.

Off the top of my head, I can't think of any way that you'd be able to win by fusing a CALL/POP pair outside of something esoteric like binary translation where you're dynamically recompiling the program stream in large blocks. You're just going to special case CALL 0 in your call/return predictor, and otherwise not do anything special. The POP immediately following the CALL will get its result out of whatever store-forwarding or memory-renaming structures you have, and it should be no more or less performant than any other random top-of-stack manipulation. This is one of those things that is certainly kind of weird, but it's similar enough to all the other weird crap that you have to deal with all the time that your existing microarchitecture should handle it fine.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

JawnV6 posted:

This is where it's sorta obvious my info is 5+ years out of date, I didn't know MOV was viable for any fusing. Register renaming makes some instructions quite light, but I thought that mechanism was separate from fusing. Stream cache also implies all goofy page/memory boundaries aren't going to be relevant.

JawnV6 posted:

Hmph. Binary translation shouldn't be that esoteric. C'mon, get things in gear over there.
Please interpret my examples as demonstrative of a general idea and not as "here is what current Core microarchitecture actually does". I can't possibly disclose actual fusion rules from our microarchitectures. Similarly, please understand "esoteric" to be within the context of "weirder than what most people think of as a normal CPU" and not "weirder than what Core microarchitectures may or may not genuinely do".
