Ontopic Random Computer-Electronics Thread

Took a NodeMCU board running Konnected.IO and paired it with a simple relay.

When the grid side of our system loses power, it can automatically shut off stuff on the battery side that doesn't really need to be on.
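Konnected handles the actual automation, but the idea is simple enough that a bare Arduino-style sketch shows it. This is just an illustration of the logic, not what's actually running on the board; the pin choices and active-high grid-sense signal are my assumptions.

// Hypothetical minimal version of the grid-watch logic. Assumes a sense
// input that reads HIGH while grid power is present and a relay that feeds
// the non-essential battery-side loads. Pins are NodeMCU GPIO numbers.
const int GRID_SENSE_PIN = 5;  // D1 on a NodeMCU
const int RELAY_PIN      = 4;  // D2

void setup() {
  pinMode(GRID_SENSE_PIN, INPUT);
  pinMode(RELAY_PIN, OUTPUT);
  digitalWrite(RELAY_PIN, HIGH);  // loads stay on while the grid is up
}

void loop() {
  // Grid drops -> open the relay so the battery isn't spent on stuff
  // that doesn't need to be on.
  digitalWrite(RELAY_PIN, digitalRead(GRID_SENSE_PIN) == HIGH ? HIGH : LOW);
  delay(100);
}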


"Object Oriented Programming" should become a punchline like "The Aristocrats!", given what I did with CUDA today.

CUDA is NVIDIA's library that supposedly enables general-purpose computing on GPUs (NVIDIA GPUs, obv). The problem is that they didn't bother to make it work with all the std:: types. That's a problem for me, because a lot of the work I do involves computation on std::complex, which is _particularly_ unsupported. They took a half-assed approach and made their own cuComplex class, but the library I work on is templated so users can choose their own ordinal/node/scalar types.

So, I had to implement an interceptor class to deal with the magic conversions between std::complex and cuComplex types.
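Roughly speaking, that kind of interceptor boils down to a trait that maps the library's scalar type to the matching cuComplex type, plus a pointer shim, since std::complex<T> and the CUDA complex types share the same layout (two Ts back to back). The names here are mine, just to sketch the idea, not the actual class:

#include <complex>
#include <cuComplex.h>

// Map a host scalar type to the type CUDA kernels actually take.
// Real scalars pass through untouched; std::complex maps to cuComplex.
template <typename Scalar> struct CudaScalar { using type = Scalar; };
template <> struct CudaScalar<std::complex<float>>  { using type = cuFloatComplex;  };
template <> struct CudaScalar<std::complex<double>> { using type = cuDoubleComplex; };

// For bulk data, reinterpreting the pointer is enough -- both layouts are
// just two floats/doubles in a row -- so no element-by-element conversion.
template <typename Scalar>
typename CudaScalar<Scalar>::type* to_cuda(Scalar* p) {
  return reinterpret_cast<typename CudaScalar<Scalar>::type*>(p);
}

A kernel templated on the scalar type can then be instantiated on CudaScalar<Scalar>::type and use cuCmulf / cuCmul for the arithmetic instead of operator*.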

What I don't get is how it ever compiled before: everything compiled down to bytecode just fine, but when you ran it on a CUDA node the unit tests would explode in a fiery ball of floating-point errors.
 
"Object Oriented Programming" should become a punchline like "The Aristocrats!", given what I did with CUDA today.

CUDA is NVidia's library that supposedly enables general purpose computing on GPUs (NVidia GPUs, obv). The problem is that they didn't bother to make it work with all the std::<types>. That's a problem for me, because a lot of the work I do involves computation on std::complex, which is _particularly_ unsupported. They did their half-assed approach, by making a cuda complex class, but the library I work on is templated to allow users to choose their own ordinals/node/scalar types.

So, I had to implement an interceptor class to deal with the magic conversions between std::complex and cuComplex types.

What I don't get is how it compiled before - everything compiled down to bytecode fine, but when you ran it on a CUDA node, the unit tests would explode in a fiery ball of floating point errors.

All I just read is "I don't know how to program CUDA"
 
or "i dont know what CUDA is"

Cause it aint just "im gonna port my x86 code straight to cuda and have it work"