Solution | Impl | Order anti-lex | Order lexico | Restrict count | Restrict size | Max in 10s | Count in 10s |
---|---|---|---|---|---|---|---|
ntheory 0.45 | XS | yes | no | yes | yes | 87 | 223,000 |
ntheory 0.45 | Perl | yes | no | yes | yes | 72 | 7,300 |
Integer::Partition 0.05 | Perl | yes | yes | no | no | 67 | - |
(unreleased, from Limbic 2004) | Perl | no | yes | no | no | 62 | 6,000 |
MJD 2013 | Perl | no | no | no | no | 71 | - |
blokhead 2007 | Perl | yes | no | no | no | 63 | - |
kvale 2004 | Perl | yes | no | no | no | 62 | - |
sfink 2004 | Perl | yes | no | no | no | 58 | - |
tye 2001 | Perl | no | no | no | no | 58 | - |
(golfed, 73 chrs) | Perl | no (73) / yes (90) | no | no | no | 21 | - |
Pari/GP 2.8.0 (not a Perl module!) | C/Pari | no | no | yes | yes | 100 | 34,000,000 |

For counting, the fastest solutions use the Hardy-Ramanujan-Rademacher formula. The state of the art is Johansson's ARB, which is thousands of times faster than Pari. Pari also uses the Rademacher formula and is quite fast. Jonathan Bober has GPL code using GMP and MPFR that is a little faster than Pari, but MPFR isn't installed on most platforms (which makes it hard to incorporate into a portable library). I'm using a triangle solution, which isn't too bad in C+GMP compared to Perl's bigints, but far off the pace of the fast formula. Integer::Partition doesn't have a counting method.
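For a sense of what recurrence-based counting looks like, here is a sketch using Euler's pentagonal number recurrence in plain Perl. This is illustrative only, not the module's actual code (the "triangle solution" above is a different recurrence), and it uses native integers, so it overflows for n in the several hundreds without bigints:

```perl
use feature 'say';

# p(n) via Euler's pentagonal number theorem:
#   p(n) = sum_{k>=1} (-1)^(k+1) * ( p(n - k(3k-1)/2) + p(n - k(3k+1)/2) )
my @p = (1);   # p(0) = 1
sub partition_count {
    my $n = shift;
    for my $i (scalar(@p) .. $n) {
        my ($sum, $k) = (0, 1);
        while (1) {
            my $g = $k * (3*$k - 1) / 2;       # generalized pentagonal number
            last if $g > $i;
            my $sign = ($k % 2) ? 1 : -1;
            $sum += $sign * $p[$i - $g];
            $sum += $sign * $p[$i - $g - $k] if $g + $k <= $i;
            $k++;
        }
        $p[$i] = $sum;
    }
    return $p[$n];
}

say partition_count(100);   # 190569292
```

Memoizing into @p means each call only extends the table, so summing or tabulating many values costs little beyond the largest one.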

ntheory (aka Math::Prime::Util) has a function taking a block (aka an anonymous sub), the number, and an optional hash of restriction parameters. The block is called for each partition with the argument list set to the partition. The restriction parameters are similar to Pari/GP's, with min/max count and min/max element size. This can save quite a bit of time doing filtering (sometimes with shortcuts) inside the XS code.

I debated call by value (a new argument list for each call) vs. call by reference (giving the caller access to internal data). Some XS iterators, e.g. Algorithm::Permute's fast permute, do the latter, and it is faster. I decided on the former because I like the idea that the caller can manipulate its arguments as desired without worrying about segfaults, incorrect iteration, infinite iteration, etc.

Typically the XS code would be used, but there are also pure Perl
implementations of everything. They are used if the XSLoader fails, the
environment variable `MPU_NO_XS` exists and is true, or in cases
where the arguments would overflow or are not properly parseable.

```
use ntheory qw/forpart partitions/;
forpart { say "@_"; } 8; # All partitions of 8
forpart { say "@_"; } 10,{nmin=>5,amax=>4}; # Only 5+ parts, all <= 4
say partitions(2000); # Counting
```

The Integer::Partition module has been on CPAN for a number of years, and is the only solution giving both lexicographic and anti-lexicographic ordering choices. It's reasonably fast unless you need larger values with restrictions, or want counts.

```
use Integer::Partition;
my $iter = Integer::Partition->new(8);
while (my $p = $iter->next) {
say "@$p";
}
```

Math::Pari
isn't in the table because it builds with Pari 2.1.7 by default.
`numbpart` (partition count) was added in version 2.2.5 (Nov 2003), and
`forpart` was added in 2.6.1 (Sep 2013). It's possible to build
Math::Pari with version 2.3.5 so we could get `numbpart` but not
`forpart`.

Pari's `forpart` is quite fast, and has some nice optimizations
for restrictions as well. The ordering is by number of elements rather than
a lexicographic ordering.
The only way to use this from Perl would be a system call to gp.

There are also golfed solutions in under 100 characters. We can even add a callback sub and anti-lexicographic sort and still come in at 93 characters. As usual with heavily golfed code, these are quite slow, managing only 21 partitions in under 10 seconds. This also uses an internal hash for the partitions, which means memory use will grow (though time grows faster so this isn't really an issue).

Here is my simple modification to the golfed solutions, taking 90 characters, for integer partitions in anti-lexicographic order with a callback sub. It's very slow however, so just for fun.

```
sub p{my($s,@e,$o,@f)=@_;@f=sort{$b<=>$a}@e;$_{"@f"}||=$s->(@f);p($s,++$o,@e)while--$e[0]}
p(sub { say "@_" }, 5);
```


Using the 'for $in -> $x { ... }' style was going quite slow, but the helpful people on #perl6 got me to try .get in a loop, e.g. 'while (my $x = $in.get) { ... }', which turns out to be much faster. Not only does it use almost no memory, it's 50% faster than the latest @d = "file".IO.lines.

BTW, this was meant to share my experience with using .get for reading a large file. Big thanks to Liz and others for speeding up Str.lines!

Time | Solution |
---|---|
132.1s | Perl trial division mod 6 |
291.7s | Perl trial division |
9.8s | Math::Prime::Util |
2.5s | Math::Prime::Util with precalc |
6.7s | Math::Prime::XS |

On this machine, Math::Prime::XS's simple trial division loop is faster than the non-cached routine I use in MPU until about 3e7. Part of this is that MPU uses UV internally while MPXS uses "unsigned long". On this machine UV is "unsigned long long" (64-bit) while unsigned long is only 32-bit. That means MPXS is limited to 32 bits, so it doesn't work past 2^32, and it probably explains the speed difference as well.
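You can check which sizes your own perl was built with via the core Config module (just to illustrate the UV vs. unsigned long distinction above):

```perl
use Config;

# uvsize is the byte size of Perl's UV; longsize is the C "unsigned long".
# On the machine described above these are 8 and 4 respectively.
printf "UV: %d bytes, unsigned long: %d bytes\n",
       $Config{uvsize}, $Config{longsize};
```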

These aren't huge numbers, but from the Math::Prime::Util documentation:

is_prime from 10^100 to 10^100 + 0.2M:

Time | Solution |
---|---|
2.2s | Math::Prime::Util (BPSW + 1 random M-R) |
2.7s | Math::Pari w/2.3.5 (BPSW) |
13.0s | Math::Primality (BPSW) |
35.2s | Math::Pari (10 random M-R) |
38.6s | Math::Prime::Util w/o GMP (BPSW) |
70.7s | Math::Prime::Util (n-1 or ECPP proof) |
102.9s | Math::Pari w/2.3.5 (APR-CL proof) |

Math::Prime::Util with the GMP backend will support hundreds of thousands of digits, and is probably the fastest code for large numbers other than OpenPFGW's Fermat test, and is substantially faster than any of the other Perl modules. See this stackexchange challenge, or Nicely's list of first occurrence prime gaps where I used this module.

Caveat being that without Math::Prime::Util::GMP installed, it uses Math::BigInt (with GMP or Pari backend), which is super slow. My todo list has some sort of replacement to get a bigint solution that is both (1) portable assuming XS, and (2) reasonably fast. Also, there are some nice optimizations for x86_64 as well as 64-bit in general. It is still fast on non-x86 machines, but it will miss some of the better optimizations (asm mulmod, montgomery math).

Math::Pari, Math::GMP, Math::GMPz, and Math::Primality will support bigints pretty well. For the two GMP methods you'll have to decide how many tests to use. Math::Pari really needs to be updated to use a newer Pari by default -- the current version will do 10 M-R tests and is quite a bit slower than when built with Pari 2.3.5.

Math::Prime::XS does not support bigints. For 64-bit primes it is about 3-4 million times slower than MPU on my machine (but should be fast for most composites).

Math::Prime::FastSieve is going to eat a lot of memory and time making the sieve once we're past 10^8 or so. The answers are fast once done, but it's not the best solution. It took me 2 minutes to sieve to 10^10, and beyond that will take GB of memory.

Trial division is exponential time so even with C+GMP is not going to be practical past 25-30 digits (and is hideously slow at those sizes). The Perl code is just going to get worse.

Time for primality proofs is another discussion -- I'm writing some web pages on that since I realized I keep writing the same thing on forums.

For the largest known primes, we'd want to use a Lucas-Lehmer test since they are Mersenne primes. I have not added any special form tests (nor have the other modules), but the LL test is pretty straightforward. They would still take a long time. The largest currently known prime has 17,425,170 digits. Using code specifically made for this, it took 6 days on a 32-core server and 3.6 days on a GPU.
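The Lucas-Lehmer test itself is only a few lines. Here is a sketch using core Math::BigInt -- fine for small exponents, hopelessly slow for record sizes, where FFT-based squaring is essential:

```perl
use feature 'say';
use Math::BigInt;

# Lucas-Lehmer: for an odd prime p, M = 2^p - 1 is prime iff
# s_{p-2} == 0 (mod M), where s_0 = 4 and s_{k+1} = s_k^2 - 2.
sub is_mersenne_prime {
    my $p = shift;                                   # p an odd prime
    my $m = Math::BigInt->new(2)->bpow($p)->bdec;    # M = 2^p - 1
    my $s = Math::BigInt->new(4);
    $s->bmul($s)->bsub(2)->bmod($m) for 1 .. $p - 2; # square, -2, reduce
    return $s->is_zero ? 1 : 0;
}

# 11 is prime but 2^11 - 1 = 2047 = 23 * 89 is not:
say join ",", grep { is_mersenne_prime($_) } 3, 5, 7, 11, 13;   # 3,5,7,13
```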

For a general form numbers, last year some people ran tests on a couple Wagstaff PRPs with ~4 million digits. OpenPFGW took 4-70 hours to show they were Fermat PRPs, and 5 days for the Lucas test. A fast Frobenius test implemented with GMP took slightly over one month.


Memory | Time | Solution |
---|---|---|
2096k | 72.6s | Perl trial division mod 6 |
2100k | 124.8s | Perl trial division |
3652k | 36.2s | Math::GMP |
3940k | 14.8s | Math::GMPz |
4040k | 1.9s | Math::Prime::Util (no GMP backend) |
4388k | 1.9s | Math::Prime::Util (with GMP backend) |
4568k | 1.4s | Math::Prime::Util (GMP + precalc) |
4888k | 4.4s | Math::Prime::XS |
5316k | 245.1s | Math::Primality |
5492k | 29.8s | Math::Pari |
6260k | 1.5s | Math::Prime::FastSieve |
~16MB | >1 year | regex |

Times are with perl 5.20.0 on an Ubuntu Core2 E7500 machine. I used `/usr/bin/time perl -E '`*...*`'` to get the time and memory use. With this system just starting up Perl on the command line takes about 2MB.

The first two entries are simple Perl trial division routines:

```
# mod-6 wheel
sub isprime {
  my($n) = @_;
  return ($n >= 2) if $n < 4;
  return if ($n % 2 == 0) || ($n % 3 == 0);
  my $sn = int(sqrt($n));
  for (my $i = 5; $i <= $sn; $i += 6) { return unless $n % $i && $n % ($i+2); }
  1;
}
$s += isprime($_) for 1..1e7; say $s;
```

```
# Standard method, from RosettaCode
sub isprime { my($n) = @_; $n % $_ or return for 2 .. sqrt $n; $n > 1; }
$s += isprime($_) for 1..1e7; say $s;
```

These have essentially no memory use and get the job done for small inputs, but they are pretty slow, and slow down further as the input increases.

The modules Math::GMP and Math::GMPz have calls to GMP's mpz_probab_prime_p function. They are the lowest memory of the module solutions by a small margin, but not the fastest. They shouldn't slow down much with larger inputs.

Math::GMPz has a somewhat clunkier interface but is faster, and it exports the entire GMP integer API (which makes it use a little more memory to load). The speed will differ based on the number of tests: only 3 are required for this size, but we need at least 11 for 64-bit inputs, and probably more to avoid false results. Math::GMP has much more object overhead, though I believe this is being worked on.

Like some other modules, the result of the primality test is either 2 (definitely prime), 1 (probably prime), or 0 (definitely composite). Using the double negation is an easy and fast way to make the result either 1 or 0.

`use Math::GMP; $i=Math::GMP->new(0); for (1..1e7) { $i++; $s += !!$i->probab_prime(15) } ; say $s;`

`use Math::GMPz; $i=Math::GMPz->new(0); for (1..1e7) { $i++; $s += !!Math::GMPz::Rmpz_probab_prime_p($i,15) }; say $s`

Math::Prime::Util is my number theory module. After working on cutting down memory use it's reasonably small even with many exportable functions. It is the fastest solution as well.

By default it will load the GMP back-end if available. This uses a little more memory, but can be turned off either by not having it installed or setting the environment variable `MPU_NO_GMP` to a non-zero value.

`use Math::Prime::Util "is_prime"; $s += !!is_prime($_) for 1..1e7; say $s;`

To match the behavior of Math::Prime::FastSieve, we can precalculate the primes, making `is_prime` a simple bit-set lookup.

`use Math::Prime::Util qw/is_prime prime_precalc/; prime_precalc(1e7); $s += !!is_prime($_) for 1..1e7; say $s;`

Math::Prime::XS does mod-6 trial division in C. It's fast for small inputs, but as expected from an exponential-time algorithm, will slow down a lot with large inputs. It uses more memory than I'd expect.

`use Math::Prime::XS "is_prime"; $s += is_prime($_) for 1..1e7; say $s;`

Math::Primality implements the BPSW algorithm in Perl using Math::GMPz. It really is better suited for bigint inputs, being both slow and memory intensive for this simple task.

`use Math::Primality "is_prime"; $s += !!is_prime($_) for 1..1e7; say $s;`

Math::Pari is the Perl interface to the old Pari 2.1.7 library. It has lots of functionality, but does take up almost 1.5MB more startup memory. While PARI/GP is reasonably efficient (about 4 seconds), the module returns the boolean result as a Math::Pari object which sucks up lots of time. The double negation is a little faster than directly summing the result.

`use Math::Pari qw/isprime/; $s += !!isprime($_) for 1..1e7; say $s;`

Math::Prime::FastSieve takes a slightly different approach. It's written using Inline::CPP and sieves a range into a bit vector, after which operations (such as `isprime`) can be performed efficiently. That does limit its range, but the time shows it's quite fast at this operation.

`use Math::Prime::FastSieve; my $sieve = Math::Prime::FastSieve::Sieve->new(1e7); $s += $sieve->isprime($_) for 1..1e7; say $s;`

Lastly we come to Abigail's regex. Very popular for code golfing and showing awesome regex hacks, it occasionally is seen as a practical recommendation from people who have clearly never used it for non-toy-size inputs. It's very cool. It's also not practical for larger inputs. For this task it took 6.8 seconds and 2684k for the first 10k, but 2607 seconds and 7144k for isprime for the first 100k integers. It takes over 1 minute to verify 999,983 is prime, and for 9,999,991 I killed it after 40 minutes. Hence my estimate of over a year to finish the sum-to-10M example.

`sub isprime { ('1' x shift) !~ /^1?$|^(11+?)\1+$/ } $s += isprime($_) for 1..1e7; say $s`
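For reference, a commented breakdown of what the regex is doing:

```perl
use feature 'say';

# Unary-encoding primality: write n as a string of n '1' characters.
#   ^1?$         matches n = 0 or n = 1 (neither is prime)
#   ^(11+?)\1+$  matches when the string splits into two or more equal
#                groups of length >= 2, i.e. n has a nontrivial divisor
# n is prime iff neither alternative matches -- elegant, and extremely
# slow for large n due to all the backtracking.
sub isprime { ('1' x shift) !~ /^1?$|^(11+?)\1+$/ }

say join ",", grep { isprime($_) } 1 .. 30;   # 2,3,5,7,11,13,17,19,23,29
```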

Conclusions

If you're using Moose or a long-running process, the memory use for any of the reasonable solutions here probably doesn't matter -- use what is easy and fast. For command-line programs or processes that are spun up just for a single task, the memory use can matter. My module was hitting 9MB before I finally had enough and reduced it substantially (a big chunk of that was by having functions go straight to XS and load up the thousands of lines of PP only if required). Even for such a simple task as this we can see sizes ranging from 2MB to 6MB, with over 2MB difference even between modules.

Another subject that is important, especially for making utility scripts, is startup time. This task did not measure that, but it can also be a bottleneck especially if comparing vs. standalone C programs that have essentially no startup cost.

Math::Pari probably needs a co-maintainer with lots of time. So far I don't think anyone qualified has wanted to step up. It's a lot of work. On the plus side, the RT situation isn't quite so bad -- it would look a lot better with some pruning of duplicates and closing of fixed issues. There are a lot of issues that look like they're fixed but just haven't been closed.

This leads to the digression of how it would be nice to wean the remainder of the Perl crypto modules off of Math::Pari, but they're often in the same boat. The authors are around but don't have time to work on the modules and aren't ready to give them up. There's also the issue of the alternatives: Math::BigInt is core and portable, but super slow for this work without one of the backends, and the backends also have long-standing critical bugs. Math::GMP or Math::GMPz would be the obvious and best choices, but then we're requiring platforms to have GMP installed. I'm still trying to find time to get an alternative out, but it won't be ready in time for CPAN day.

In many cases what I found was that often just using string storage was faster than Bit::Vector, merely because Perl optimizes the heck out of things like substr. Once the vector grows large (e.g. for Unary codes) then Bit::Vector is better. Using 32-bit vectors with bit twiddling in Perl was pretty close to Bit::Vector's speed for my operations. Of course it will differ based on your operations.
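To illustrate the string-storage idea (a toy sketch, not the actual class): keep the whole bit stream in one Perl string and let substr do the work.

```perl
use feature 'say';

# A bit stream stored as a plain string of '0'/'1' characters. Perl's
# substr is heavily optimized, so append and extract stay fast until
# the string gets quite large.
my $stream = '';
sub put_bits { $stream .= $_[0] }                # append a bit string
sub get_bits { substr($stream, $_[0], $_[1]) }   # (position, length)

put_bits('101');     # e.g. a 3-bit code
put_bits('0011');    # a 4-bit code
say get_bits(0, 7);  # 1010011
```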

Using an XS back end for the bit manipulation results in ~70x speedup for this application (and another 2x speedup if I go straight to XS and skip the Perl Moo class entirely, but then you give up some extensibility).

One thing I ended up adding to my test suite for one package was an "examples.t" file, which contains tests for all the examples in the synopsis and throughout the documentation (including the small examples per method and the EXAMPLES section). This compares the expected output, which Test::Synopsis doesn't do. I wanted to make sure I caught any regressions in anything I used in the documentation, because that would be particularly embarrassing. The downside is keeping things in sync.
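A minimal version of that examples.t idea might look like this (hypothetical file with a made-up example table; assumes a Unix-ish shell for the backticks):

```perl
# examples.t: run each one-liner from the documentation and compare output.
use strict;
use warnings;
use Test::More;

my %examples = (
    'print join ",", map { $_ * $_ } 1 .. 5'  =>  '1,4,9,16,25',
);

while (my ($code, $expect) = each %examples) {
    my $out = qx{$^X -e '$code'};    # $^X is the running perl binary
    is($out, $expect, $code);
}
done_testing();
```

Keeping %examples literally in sync with the POD is the manual part; a fancier version could parse the documentation's verbatim blocks instead.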

* Longest chain of monthly reference to module in other sources (e.g. stackoverflow, non-Perl blogs, RosettaCode, etc.). Modules that show how Perl can be usefully extended outside of the echo chamber. Hard to tally and prevent gaming though.

* Longest chain of an author contributing to unique non-owned modules (that is, each week/month they have to submit an RT, issue, patch, pull request, etc. to a module that they don't own, each module only allowed once in a chain).

My taste runs to trying to increase the quality of what we have, vs. putting out yet another variation on some module we already have. Measuring this is certainly harder than tracking new CPAN submissions though.

I am going to do a LibTomMath-based module that's self contained (see CryptX, for instance). Interface similar to Math::GMP, probably with some additions (e.g. modular inverse, exponentiation, etc., similar to Math::BigInt). It certainly won't be as fast as GMP for the big stuff, but it will be fine for most uses, and I will be able to include it as a dependency and still work on all platforms.

For Algorithm::AM, I'm not sure if any of this would help or not. LTM will be tucked away inside the Math::LTM module where you couldn't get to it (let me know if there is a way, but given we don't install the C headers as part of the module, for instance, I don't see how it would work). If the concern is performance and readability, you could use some generic functions such as a subset of the GMP API (e.g. mpz_init, mpz_set_ui, mpz_add, mpz_mul, mpz_get_d, mpz_get_str, mpz_clear), then have Makefile.PL / the compiler make a decision which to use (1) gcc's __int128_t type, (2) GMP (no functions needed, just include GMP.h), or (3) by-hand. Good for performance, not so good for testing. GMP is installed on quite a few systems these days, but definitely not all.

**invmod(a,n)** computes the modular inverse. Similar to Pari's

**vecsum(...)** returns the sum of a list of integers. What about List::Util::sum? I was using that, until I started wondering why my results were sometimes incorrect. Try:

`my $n = int(2**53); die unless $n+13 == int(sum($n,13));`

The problem is that List::Util turns everything into an NV, which means it lops off the lower 11 or so bits of 64-bit integers. Oops. min and max have the same problem, so the next release of MPU will probably add vecmin and vecmax functions.
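To see the truncation concretely (this assumes a 64-bit perl, so that $n + 13 is exact IV arithmetic):

```perl
use List::Util qw/sum/;

my $n  = int(2**53);     # largest integer a double represents exactly
my $iv = $n + 13;        # IV arithmetic: exact on a 64-bit perl
my $nv = sum($n, 13);    # List::Util sums in NV (double): low bits lost
printf "IV: %d  NV: %.0f  equal: %s\n",
       $iv, $nv, ($iv == $nv ? "yes" : "no");
```

On a 64-bit perl the two results differ, because 2^53 + 13 is not representable as a double and gets rounded to a nearby even integer.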

**binomial(n,k)** computes a binomial coefficient. I discussed this in a previous post. It uses the full Kronenburg definition for negative arguments, and does as much computation in C as possible for speed.

**forpart { ... } n[,{...}]** loops over integer partitions. Pari 2.6 added this, and I thought I would as well. In its simplest form it gives the additive partitions like the Integer::Partition module, just much faster (albeit not using an iterator). It also allows restrictions to be given, such as

`forpart { say "@_" } 10,{n=>5}` to only show the partitions with exactly 5 elements. We can use amin and amax to restrict by element size, and nmin and nmax to restrict by number of elements. Of course any restriction can be done inside the block, but using the passed-in restriction means the block doesn't get called at all -- important when the number of unrestricted partitions is in the tens or hundreds of millions.
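The same win can be had in pure Perl by pushing the restriction into the generator rather than filtering in the block. Here is a toy recursive generator with an amax-style bound (illustrative only, not the module's code; `forpart_amax` is a made-up name):

```perl
use feature 'say';

# Anti-lexicographic partitions of $n with every part <= $amax.
# Because the bound is enforced while generating, non-matching
# partitions are never produced at all -- no callback, no filtering.
sub forpart_amax {
    my ($cb, $n, $amax, @prefix) = @_;
    return $cb->(@prefix) if $n == 0;
    my $max = $amax < $n ? $amax : $n;              # part can't exceed rest
    $max = $prefix[-1] if @prefix && $prefix[-1] < $max;   # non-increasing
    forpart_amax($cb, $n - $_, $amax, @prefix, $_) for reverse 1 .. $max;
}

forpart_amax(sub { say "@_" }, 6, 3);   # 7 partitions of 6 with parts <= 3
```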

Performance

Wojciech Izykowski has been working on fast Miller-Rabin testing for some time, including, for a few years now, hosting the Best known SPRP bases collection. He's also been working on fast Miller-Rabin primality test code. I sent him a 1994 paper by Arazi a while back, which he managed to turn into a performance improvement to his 2^64 modular inverse, and some more back and forth led to even faster code. That, combined with a few other changes to the Montgomery math, resulted in close to a 50% speedup in the primality tests, which were already blazingly fast. On my 4770k it's now less than a microsecond at any 64-bit size, and 2-5x faster than Pari/GP. The caveat is that the fastest Montgomery path is only used on x86_64. Regardless of platform, the results for any 64-bit number are deterministic -- there are no false results because we use the BPSW test.

I also made a small speedup for small integer factoring, which helps speed up a few functions that call it, e.g. euler_phi, divisor_sum, moebius, etc. Useful for shaving off a little time from naive Project Euler solutions perhaps. I had a few tasks that did a lot of calling of these functions for small-ish values, and while they're already pretty fast, every little bit helps.

What's next?

For minor updates, I already mentioned vecmin and vecmax. I have some improvements to legendre_phi that should be done. I'm thinking is_power may get an optional third argument like Pari that gets set to the root.

Math::Prime::Util::GMP has implementations of valuation, invmod, is_pseudoprime, and binomial now, to help speed those up. I'll probably add vecsum as well. I have a speedup coming for primality testing of numbers in the 40 to 5000 digit range, which is important for the applications I'm running.

Lastly, I'm really hoping to get time to make an XS bigint module, which should give a big boost to the cases where we need bigint processing but don't have GMP. Math::BigInt is a lot slower than I'd like.
