Wouldn't it have been great to read that two clever programmers had collaborated and that DateTime version ??? was backward compatible and really flew.

Then the benefits would have gradually crept into our codebase without me getting into another argument about standard modules.

The stats quoted for Time::Moment are impressive.

At a glance, the basic constructor arguments and basic attributes are compatible with DateTime, making it a drop-in replacement in some cases.

However, the basic arithmetic operations are different, and these are fairly common operations, such as adding hours or days. That will make it very limited as a drop-in replacement.
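To illustrate the difference, here's a minimal sketch (assuming both modules are installed): construction lines up, but DateTime's mutating add() has no direct counterpart in Time::Moment, which is immutable and uses plus_* methods instead.

```perl
use strict;
use warnings;
use DateTime;
use Time::Moment;

# Constructor arguments line up, so construction is interchangeable:
my $dt = DateTime->new( year => 2014, month => 3, day => 1, hour => 12 );
my $tm = Time::Moment->new( year => 2014, month => 3, day => 1, hour => 12 );

# ...but the arithmetic APIs differ. DateTime mutates in place via add():
$dt->add( days => 2, hours => 3 );

# Time::Moment is immutable; plus_* methods return a new object:
$tm = $tm->plus_days(2)->plus_hours(3);

print $dt->iso8601,  "\n";   # 2014-03-03T15:00:00
print $tm->to_string, "\n";
```

Any code that calls add() or subtract() on its date objects would therefore need changes before Time::Moment could be swapped in.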

Is there a case for some sort of DateTime::Facade namespace to hold modules which put DateTime compatible wrappers around other date/time modules? There would be an overhead in the extra layer of abstraction, but this might not be large.
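A sketch of what one such wrapper might look like — the DateTime::Facade::TimeMoment name and the method selection are entirely hypothetical, and only a couple of units are handled:

```perl
package DateTime::Facade::TimeMoment;   # hypothetical namespace
use strict;
use warnings;
use Time::Moment;

sub new {
    my ($class, %args) = @_;
    return bless { tm => Time::Moment->new(%args) }, $class;
}

# Translate DateTime-style mutating add() into Time::Moment's
# immutable plus_* calls, updating the wrapped object in place.
sub add {
    my ($self, %units) = @_;
    my $tm = $self->{tm};
    $tm = $tm->plus_days(  $units{days}  ) if $units{days};
    $tm = $tm->plus_hours( $units{hours} ) if $units{hours};
    # ... months, minutes, seconds would follow the same pattern
    $self->{tm} = $tm;
    return $self;
}

sub ymd { $_[0]->{tm}->strftime('%Y-%m-%d') }

1;
```

The overhead here is one extra method dispatch and a hash lookup per call, which supports the guess that the abstraction cost might not be large.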

Slightly worried about what I've written there. I'm writing about an argument at $work, not with anyone on this list.

Crowd funding is an interesting approach to decentralizing the event organizer's job: rather than pay a certain ticket price (or sponsorship amount) and leave it to the organizers to figure out how to get speakers to their event, the community votes with their own money to decide who they want. The model seems to work well so far. But while decentralizing can be robust for certain things, it also abandons the economy of scale.

I'm worried about how much crowd funding conferences will scale when more and more speakers attempt to jump on board. What happens when the coolness of crowd funding wears off and you have 30 or 40 speakers asking for money to go to various YAPC's? What happens to events when we start to see more failed attempts at funding than successful ones?

You've inspired me to write my own post on how much YAPC's cost these days. The figures may be surprising to some people.

For reference: https://github.com/leejo/CGI.pm/issues/109

It will actually make life easier for anyone who's grabbing the distname from the metadata, then using it to look the dist up on various systems. Currently such an approach will get a distname of 'CGI.pm', which it won't then find on various core services.

It was me who raised the ticket :-)

On Saturday, maybe? ;-)


Time::Moment can never be "backwards" compatible with DateTime.pm due to DateTime.pm's design.

These aren't huge numbers, but from the Math::Prime::Util documentation:

is_prime from 10^100 to 10^100 + 0.2M:

      2.2s  Math::Prime::Util (BPSW + 1 random M-R)
      2.7s  Math::Pari w/2.3.5 (BPSW)
     13.0s  Math::Primality (BPSW)
     35.2s  Math::Pari (10 random M-R)
     38.6s  Math::Prime::Util w/o GMP (BPSW)
     70.7s  Math::Prime::Util (n-1 or ECPP proof)
    102.9s  Math::Pari w/2.3.5 (APR-CL proof)

Math::Prime::Util with the GMP backend will support hundreds of thousands of digits, and is probably the fastest code for large numbers other than OpenPFGW's Fermat test, and is substantially faster than any of the other Perl modules. See this stackexchange challenge, or Nicely's list of first occurrence prime gaps where I used this module.
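As a small illustration of working at that 100-digit scale (assuming Math::Prime::Util, and ideally its GMP backend, is installed):

```perl
use strict;
use warnings;
use bigint;
use Math::Prime::Util qw(is_prime next_prime);

# Find the first prime above a googol (10^100) using the module's
# BPSW-based test.  The offset from 10^100 is the well-known 267.
my $n = 10**100;
my $p = next_prime($n);

print "next prime is 10^100 + ", $p - $n, "\n";   # 10^100 + 267
print is_prime($p) ? "prime\n" : "composite\n";
```

With the GMP backend this finishes in a fraction of a second; with the Math::BigInt fallback it is noticeably slower, as described below.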

Caveat being that without Math::Prime::Util::GMP installed, it uses Math::BigInt (with GMP or Pari backend), which is super slow. My todo list has some sort of replacement to get a bigint solution that is both (1) portable assuming XS, and (2) reasonably fast. Also, there are some nice optimizations for x86_64 as well as 64-bit in general. It is still fast on non-x86 machines, but it will miss some of the better optimizations (asm mulmod, montgomery math).

Math::Pari, Math::GMP, Math::GMPz, and Math::Primality will support bigints pretty well. For the two GMP methods you'll have to decide how many tests to use. Math::Pari really needs to be updated to use a newer Pari by default -- the current version will do 10 M-R tests and is quite a bit slower than when built with Pari 2.3.5.

Math::Prime::XS does not support bigints. For 64-bit primes it is about 3-4 million times slower than MPU on my machine (but should be fast for most composites).

Math::Prime::FastSieve is going to eat a lot of memory and time making the sieve once we're past 10^8 or so. The answers are fast once done, but it's not the best solution. It took me 2 minutes to sieve to 10^10, and beyond that will take GB of memory.

Trial division is exponential time, so even with C+GMP it is not going to be practical past 25-30 digits (and is hideously slow at those sizes). The Perl code is just going to get worse.

Time for primality proofs is another discussion -- I'm writing some web pages on that since I realized I keep writing the same thing on forums.

For the largest known primes, we'd want to use a Lucas-Lehmer test since they are Mersenne primes. I have not added any special form tests (nor have the other modules), but the LL test is pretty straightforward. They would still take a long time. The largest currently known prime has 17,425,170 digits. Using code specifically made for this, it took 6 days on a 32-core server and 3.6 days on a GPU.
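Since the LL test really is straightforward, here is a minimal pure-Perl sketch using core Math::BigInt — utterly impractical for record-size exponents, but it shows the recurrence:

```perl
use strict;
use warnings;
use Math::BigInt;

# Lucas-Lehmer: for an odd prime p, M_p = 2^p - 1 is prime iff
# s_{p-2} == 0, where s_0 = 4 and s_{k+1} = (s_k^2 - 2) mod M_p.
sub is_mersenne_prime {
    my $p = shift;
    my $m = Math::BigInt->new(2)->bpow($p)->bsub(1);       # M_p = 2^p - 1
    my $s = Math::BigInt->new(4);
    $s->bmul($s)->bsub(2)->bmod($m) for 2 .. $p - 1;       # p-2 iterations
    return $s->is_zero;
}

printf "M_%d is %s\n", $_, is_mersenne_prime($_) ? "prime" : "composite"
    for 3, 7, 11, 13;                                      # 11 gives 2047 = 23*89
```

Real LL implementations get their speed from FFT-based squaring and the special-form reduction mod 2^p - 1, which is where the multi-day GPU and 32-core runs come in.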

For general form numbers, last year some people ran tests on a couple of Wagstaff PRPs with ~4 million digits. OpenPFGW took 4-70 hours to show they were Fermat PRPs, and 5 days for the Lucas test. A fast Frobenius test implemented with GMP took slightly over one month.

    132.1  Perl trial division mod 6
    291.7  Perl trial division
      9.8  Math::Prime::Util
      2.5  Math::Prime::Util with precalc
      6.7  Math::Prime::XS

On this machine, Math::Prime::XS's simple trial division loop is faster than the non-cached routine I use in MPU until 3e7. Part of this is that MPU uses UV internally while MPXS uses "unsigned long". On this machine UV is "unsigned long long" (64-bit) and unsigned long is only 32-bit. That means MPXS is 32-bit, so doesn't work past 2^32 and probably explains the speed difference as well.
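For reference, the two pure-Perl entries in the timings above look roughly like this (my reconstruction, not the exact benchmarked code): plain trial division tries every divisor, while the "mod 6" variant skips multiples of 2 and 3 by only testing divisors of the form 6k±1.

```perl
use strict;
use warnings;

# Plain trial division: test every candidate divisor up to sqrt(n).
sub is_prime_td {
    my $n = shift;
    return 0 if $n < 2;
    for (my $d = 2; $d * $d <= $n; $d++) {
        return 0 if $n % $d == 0;
    }
    return 1;
}

# "Mod 6" wheel: after 2 and 3, all primes are 6k +/- 1, so only
# divisors congruent to 1 or 5 mod 6 need testing.
sub is_prime_td6 {
    my $n = shift;
    return 0 if $n < 2;
    return 1 if $n == 2 || $n == 3;
    return 0 if $n % 2 == 0 || $n % 3 == 0;
    for (my $d = 5; $d * $d <= $n; $d += 6) {
        return 0 if $n % $d == 0 || $n % ($d + 2) == 0;
    }
    return 1;
}
```

The wheel does roughly a third of the divisions, which matches the ~2x gap between the 291.7 and 132.1 figures once loop overhead is accounted for.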

The first category are the Damians of our community. These are people whose attendance at any conference provides an immediate and obvious benefit every time you invite them.

People like me and ribasushi are a different category. While I do think we both make a positive contribution to a conference, it's nothing near the value that someone like Damian would provide. Instead, I believe our crowdsourcing success is based on community goodwill. We do a lot of open source work that we aren't getting paid for. There are a lot more people who would be sponsored if they tried, but unlike Damian I don't think they/we could get sponsored time after time in that fashion, exactly because next year there will be someone else wanting to do the same.
