I wrote some code that multiplies matrices of every size from 10x10 through 1000x1000. It took approximately 2021 seconds to run. By my math it performed at least 260 billion operations in total, which comes out to about 129 million operations per second. Yet the theoretical speed of one core of the processor it ran on (a Xeon X5550) is about 10 GFLOPS. Do RAM, the OS, and background tasks really cause a program to use only 2% of the theoretical speed?
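For reference, here is a minimal sketch (in C) of the kind of benchmark described above. The triple-loop structure and the 2n^3 operation count per multiply (which tallies the adds as well as the multiplies, so roughly twice the 260 billion quoted above if that figure counted multiplies only) are assumptions about the asker's code, not the code itself:

```c
/* Sketch only: naive triple-loop multiplies for every size n = 10..1000,
 * timing the whole run and counting ~2*n^3 flops per multiply. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void matmul(int n, const double *a, const double *b, double *c)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double sum = 0.0;
            for (int k = 0; k < n; k++)
                sum += a[i * n + k] * b[k * n + j]; /* b read column-wise: stride n */
            c[i * n + j] = sum;
        }
}

int main(void)
{
    double ops = 0.0;
    clock_t start = clock();
    for (int n = 10; n <= 1000; n++) {
        double *a = malloc(sizeof(double) * n * n);
        double *b = malloc(sizeof(double) * n * n);
        double *c = malloc(sizeof(double) * n * n);
        for (int i = 0; i < n * n; i++) { a[i] = 1.0; b[i] = 2.0; }
        matmul(n, a, b, c);
        ops += 2.0 * n * n * n; /* n^3 multiplies + roughly n^3 additions */
        free(a); free(b); free(c);
    }
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("%.0f ops in %.1f s = %.0f Mops/s\n", ops, secs, ops / secs / 1e6);
    return 0;
}
```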
Answers & Comments
Verified answer
No. RAM, the OS, and background tasks account for only a small part of that gap. The 10 GFLOPS per-core figure for the X5550 assumes code that keeps the SSE floating-point units full every cycle: packed two-wide double-precision operations with the add and multiply pipelines perfectly overlapped. A straightforward triple-loop matrix multiply does none of that. It issues scalar operations, and its inner loop strides through one of the matrices column by column, so for large sizes nearly every access misses the cache and the core spends most of its time waiting on memory. Reaching only a few percent of theoretical peak is entirely normal for unoptimized code; tuned BLAS libraries (ATLAS, OpenBLAS, Intel MKL) use cache blocking and vectorization to get much closer to peak on the same hardware.
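As a concrete illustration of the memory-access point (again an assumption about the code, not the asker's actual program): swapping the two inner loops so that the innermost loop runs over j turns the column-wise, stride-n walk through b into a unit-stride one, which by itself often gives a severalfold speedup on this class of hardware.

```c
/* Cache-friendlier i-k-j loop order: both inner-loop accesses (b and c)
 * are unit-stride, and a[i*n+k] is hoisted and reused across the j loop. */
static void matmul_ikj(int n, const double *a, const double *b, double *c)
{
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++)
            c[i * n + j] = 0.0;
        for (int k = 0; k < n; k++) {
            double aik = a[i * n + k];              /* reused n times */
            for (int j = 0; j < n; j++)
                c[i * n + j] += aik * b[k * n + j]; /* sequential reads and writes */
        }
    }
}
```

Swapping this in for matmul in the sketch above and compiling with optimizations enabled typically narrows the gap considerably; closing it further takes cache blocking and SIMD, which is exactly what BLAS implementations do.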