For reference: https://github.com/leejo/CGI.pm/issues/109

It will actually make life easier for anyone who's grabbing the distname from the metadata, then using it to look the dist up on various systems. Currently such an approach will get a distname of 'CGI.pm', which it won't then find on various core services.

It was me who raised the ticket :-)

On Saturday, maybe? ;-)


Time::Moment can never be backwards compatible with DateTime.pm due to DateTime.pm's design.

These aren't huge numbers, but from the Math::Prime::Util documentation:

is_prime from 10^100 to 10^100 + 0.2M:

    2.2s  Math::Prime::Util (BPSW + 1 random M-R)
    2.7s  Math::Pari w/2.3.5 (BPSW)
   13.0s  Math::Primality (BPSW)
   35.2s  Math::Pari (10 random M-R)
   38.6s  Math::Prime::Util w/o GMP (BPSW)
   70.7s  Math::Prime::Util (n-1 or ECPP proof)
  102.9s  Math::Pari w/2.3.5 (APR-CL proof)

Math::Prime::Util with the GMP backend will support hundreds of thousands of digits, and is probably the fastest code for large numbers other than OpenPFGW's Fermat test, and is substantially faster than any of the other Perl modules. See this stackexchange challenge, or Nicely's list of first occurrence prime gaps where I used this module.

The caveat is that without Math::Prime::Util::GMP installed, it uses Math::BigInt (with GMP or Pari backend), which is super slow. On my todo list is some sort of replacement: a bigint solution that is both (1) portable assuming XS, and (2) reasonably fast. Also, there are some nice optimizations for x86_64 as well as 64-bit in general. It is still fast on non-x86 machines, but it will miss some of the better optimizations (asm mulmod, Montgomery math).

Math::Pari, Math::GMP, Math::GMPz, and Math::Primality will support bigints pretty well. For the two GMP methods you'll have to decide how many tests to use. Math::Pari really needs to be updated to use a newer Pari by default -- the current version will do 10 M-R tests and is quite a bit slower than when built with Pari 2.3.5.
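To make the "how many tests to use" trade-off concrete, here is a minimal sketch of a Miller-Rabin probable-prime test -- in Python rather than Perl, purely to illustrate the algorithm these modules implement, not any module's actual code. Each round with a random base lets a composite slip through with probability below 1/4, so confidence compounds quickly with more rounds:

```python
import random

def is_probable_prime(n, rounds=10):
    """Miller-Rabin with `rounds` random bases (illustrative sketch)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # a^d mod n
        if x in (1, n - 1):
            continue              # a is not a witness for this round
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a proves n composite
    return True                   # probably prime
```

With 10 rounds a composite survives with probability below 4^-10, roughly one in a million. The BPSW test used by several of the modules above avoids random bases entirely by combining a base-2 strong test with a strong Lucas test.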

Math::Prime::XS does not support bigints. For 64-bit primes it is about 3-4 million times slower than MPU on my machine (but should be fast for most composites).

Math::Prime::FastSieve is going to eat a lot of memory and time making the sieve once we're past 10^8 or so. The answers are fast once done, but it's not the best solution. It took me 2 minutes to sieve to 10^10, and beyond that will take GB of memory.

Trial division is exponential time, so even with C+GMP it is not going to be practical past 25-30 digits (and is hideously slow at those sizes). The Perl code is just going to get worse.
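For reference, a sketch of the mod-6 wheel variant of trial division (Python here just to show the idea, not any module's code): after 2 and 3, every prime has the form 6k ± 1, so only those candidates need testing. The loop still runs on the order of sqrt(n)/3 iterations, which is exponential in the digit count -- hence the wall at 25-30 digits.

```python
def is_prime_trial(n):
    """Trial division with a mod-6 wheel (illustrative sketch)."""
    if n < 2:
        return False
    if n < 4:
        return True               # 2 and 3 are prime
    if n % 2 == 0 or n % 3 == 0:
        return False
    f = 5
    while f * f <= n:
        # Only candidates of the form 6k-1 and 6k+1 can be prime factors.
        if n % f == 0 or n % (f + 2) == 0:
            return False
        f += 6
    return True
```

The wheel cuts the work to a third of naive trial division, but a constant factor doesn't help against exponential growth: each extra digit in n multiplies the running time by about sqrt(10).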

Time for primality proofs is another discussion -- I'm writing some web pages on that since I realized I keep writing the same thing on forums.

For the largest known primes, we'd want to use a Lucas-Lehmer test since they are Mersenne primes. I have not added any special form tests (nor have the other modules), but the LL test is pretty straightforward. They would still take a long time. The largest currently known prime has 17,425,170 digits. Using code specifically made for this, it took 6 days on a 32-core server and 3.6 days on a GPU.
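The Lucas-Lehmer recurrence itself is only a few lines; here is a sketch in Python (illustrative only -- real LL code for multi-million-digit candidates uses FFT-based squaring). For odd prime p, M_p = 2^p - 1 is prime iff s_{p-2} ≡ 0 (mod M_p), where s_0 = 4 and s_{k+1} = s_k^2 - 2:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer primality test for the Mersenne number 2^p - 1.

    Assumes p itself is prime (illustrative sketch).
    """
    if p == 2:
        return True               # M_2 = 3 is prime
    m = (1 << p) - 1              # M_p = 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m       # s_{k+1} = s_k^2 - 2 (mod M_p)
    return s == 0
```

For example, lucas_lehmer(7) returns True (M_7 = 127 is prime) and lucas_lehmer(11) returns False (2047 = 23 * 89). The whole cost is p-2 squarings of a p-bit number, which is why FFT squaring and GPU code dominate at 17 million digits.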

For general form numbers, last year some people ran tests on a couple of Wagstaff PRPs with ~4 million digits. OpenPFGW took 4-70 hours to show they were Fermat PRPs, and 5 days for the Lucas test. A fast Frobenius test implemented with GMP took slightly over one month.

132.1  Perl trial division mod 6
291.7  Perl trial division
  9.8  Math::Prime::Util
  2.5  Math::Prime::Util with precalc
  6.7  Math::Prime::XS

On this machine, Math::Prime::XS's simple trial division loop is faster than the non-cached routine I use in MPU until 3e7. Part of this is that MPU uses UV internally while MPXS uses "unsigned long". On this machine UV is "unsigned long long" (64-bit) and unsigned long is only 32-bit. That means MPXS is 32-bit, so it doesn't work past 2^32, which probably explains the speed difference as well.

The first category are the Damians of our community. These are people whose attendance provides an immediate and obvious benefit to any conference, every time you invite them.

People like me and ribasushi are a different category. While I do think we both make a positive contribution to a conference, it's not anything near the value that someone like Damian would provide. Instead, I believe our crowdsourcing success is based on community goodwill. We do a lot of open source work that we aren't getting paid for. There are a lot more people who would be sponsored if they tried, but unlike Damian I don't think they/we could get sponsored time after time in that fashion, exactly because next year there will be someone else wanting to do the same.
