Well, it seems I am making a little progress with my D&D 'Character' classes; I think I have even got a role that will stick. I did, however, have a little unfinished business from my previous post with my 'Character' class that I have to fix.
Namely, it is important to the game that I know both the current value of an ability, which may change, and the initial value, which will not, but I do not want to add all sorts of attributes with funny odd names. Putting on my game-design/player's hat for a second: the initial ability values (rolls) are used mostly at character creation, for race and class selection. After that they are only useful as tombstone data.
For example, if a character is enchanted somehow, or is just getting old, and loses some 'Constitution', her 'Resurrection Survival' throw will drop; but the maximum number of times a 'Character' can be resurrected is set by the initial 'Constitution', so I need that value as well.
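Putting my Moose hat back on for a moment, something along these lines might do (just a sketch, and the attribute names are my own for now): keep the roll as a read-only 'initial' attribute and let 'current' default to it.

package Ability;
use Moose;

# 'initial' is the original roll, fixed at construction (the tombstone value);
# 'current' starts out equal to it but is free to change during play.
has initial => ( is => 'ro', isa => 'Int', required => 1 );
has current => (
    is      => 'rw',
    isa     => 'Int',
    lazy    => 1,
    default => sub { $_[0]->initial },
);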
Well, that is the way I feel today, but at least, like Marvin, I am ready to start again. I tried in my last post to introduce a role into my D&D classes, and all I ended up with was what I started with: a muddle!
Well, having another look at 'Intelligence', 'Characters' and 'Languages', I know that all 'Characters' have at least one language, and the more 'Intelligence' they have the more languages they can learn. So language ability is more a trait of 'Intelligence' than a role that Intelligence fulfils. Hold on: a 'trait' in Moose is just a role by another name, so let's have a look.
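To give a rough idea of what I mean before digging in (a sketch only; the attribute and delegated method names are mine), Moose's native 'Array' trait is itself a role applied under the hood, and it hands a languages list some useful behaviour:

package Character;
use Moose;

# The 'Array' native trait (a role under the hood) lets the attribute
# delegate list operations directly.
has languages => (
    traits  => ['Array'],
    is      => 'ro',
    isa     => 'ArrayRef[Str]',
    default => sub { ['Common'] },
    handles => {
        learn_language => 'push',
        language_count => 'count',
    },
);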
This is part 3 of an ongoing series where I explore my relationship with Perl. You may wish to begin at the beginning.
This week we look at the joys—and the frustrations—of Moose.
Last week I talked about why I believe that at least having the choice to program object-orientedly (without it being a huge pain in the ass) is vital to me as a programmer. In the comments I also touched briefly on why I think OOP in Perl, pre-Moose, is a huge pain in the ass, but honestly I didn’t put much effort into making that point. In my experience with other programmers (both in real life and online), the majority of them—perhaps 75%, at a rough guess—already agree with that and don’t need any convincing from me. And the rest are most likely not going to be convinced no matter what I say. Besides, this isn’t really a persuasive essay. I’m not trying to change anyone’s mind. It’s the story of my relationship with Perl. And I, like most other coders I know, have placed some value in OOP, at least in some circumstances. Also, I, like most other coders I know, felt that doing OOP in Perl was a bit of a chore.
Several Pinto users have asked how to install all the modules in any given stack within their repository. At first, my response was to create a Task module that declared all the dependencies you need, and let your installer unwind the dependencies from there. Or better still, organize your app itself into a CPAN-style distribution with the dependencies declared in a META file, and then stick it into your Pinto repository.
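For what it's worth, the Task-module route can be as small as a Makefile.PL whose only job is to declare prerequisites (the distribution and module names below are purely illustrative):

# A hypothetical Task distribution: it installs nothing itself, it just
# pulls in the modules listed as prerequisites.
use ExtUtils::MakeMaker;
WriteMakefile(
    NAME      => 'Task::MyApp',   # invented name for this example
    VERSION   => '0.001',
    PREREQ_PM => {
        'Moose'       => 0,
        'Plack'       => 0,
        'DBIx::Class' => 0,
    },
);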
But not everyone was thrilled about those ideas. Some folks just want to stash stuff in the repository and then say "install it all". And now you can! Read on to learn more...
Until the past year I never truly appreciated how much faster make will build an executable when you are running on some heavy iron and use the -j option. As a p5p committer I have a shell account on dromedary, a server donated by booking.com++ and maintained by Dennis Kaarsemaker and friends (more ++). On that box I set TEST_JOBS=8, which enables me to configure, build and test perl so fast that I have never bothered to time it (probably less than 5 minutes).
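For reference, the whole cycle on a box like that is roughly the following (the job count is simply whatever suits the hardware):

export TEST_JOBS=8
./Configure -des -Dusedevel    # accept the defaults for a development build
make -j$TEST_JOBS              # parallel build
make test_harness              # TEST_JOBS also parallelizes the test run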
Today, however, was the first time that I learned that running make -j${TEST_JOBS} can obscure the results of make.
I left off my last D&D post still looking for a role, but at least I settled on a simple base class, 'Ability', which I will extend into the six abilities.
Now, looking at each ability, they all have differing effects depending on their value, so let's look at 'Intelligence' this time. According to the 'rules', the more Intelligence you have, the more languages your character can speak; for Magic User characters it also improves the chance of learning a 'spell', raises the minimum and maximum number of spells they can learn per level, and sets the highest level of spell they can use. It even limits which race or class your character can take on. However, having high Intelligence does not impart spell ability, it only enhances it if you happen to be a Magic User; likewise, very low Intelligence does not stop you from speaking at least one language.
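Thinking out loud in code for a moment (a rough sketch only; the method name and the numbers are mine, not the rule books'), 'Intelligence' would extend 'Ability' and hang its own lookups off the current value:

package Ability;
use Moose;
has current => ( is => 'rw', isa => 'Int', required => 1 );

package Intelligence;
use Moose;
extends 'Ability';

# Illustrative numbers only -- the real values belong in the rule-book tables.
sub additional_languages {
    my $self = shift;
    return $self->current > 7 ? $self->current - 7 : 0;
}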
Although, percentage-wise, the submissions are up, the actual number of respondents is just slightly lower than in previous years. That said, I'm still pleased to get roughly a third of attendees submitting survey responses. It might not give a completely accurate picture of the event, but hopefully we still get a decent flavour of it.
The Perl 6 Advent Calendar, Day 18, in addition to showing off Perl 6's built-in grammar facility, addressed a fundamental aspect of text processing: native Unicode support in a grammar.
Indeed, to talk about text processing is to talk about a character-oriented framework. The Perl 6 example was a good occasion to test Marpa::R2 and to produce a tiny tutorial with it.
A card is a face followed immediately by a suit.
Perl 6's definition:
token face {:i <[2..9]> | 10 | j | q | k | a }
proto token suit {*}
token suit:sym<♥> { <sym> }
token suit:sym<♦> { <sym> }
token suit:sym<♣> { <sym> }
token suit:sym<♠> { <sym> }
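And here is roughly how the same card idea looks with Marpa::R2's Scanless interface (SLIF). This is my own minimal sketch, not the tutorial's code; the rule layout and the sample input are invented for illustration.

#!/usr/bin/env perl
# Parse a single card, a 'face' followed immediately by a 'suit'.
use strict;
use warnings;
use utf8;
use Marpa::R2;

my $dsl = <<'END_OF_DSL';
:default ::= action => ::array
card ::= face suit
face   ~ [2-9jqka] | '10'
suit   ~ [\x{2665}\x{2666}\x{2663}\x{2660}]
END_OF_DSL

my $grammar = Marpa::R2::Scanless::G->new( { source => \$dsl } );
my $recce   = Marpa::R2::Scanless::R->new( { grammar => $grammar } );
$recce->read( \'a♠' );
my $value_ref = $recce->value;    # should hold [ 'a', '♠' ]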
UPDATE [23 December 2013], character class version:
My participation in the CPAN Once a Week contest forces me to find a module to create or update every week. And because I don't want to cheat, it has to be a meaningful change (I also try not to make a new Acme::MetaSyntactic release every week).
This week, I decided to look at my module list through the filter of CPAN Testers.
[One of them was really not looking good](http://matrix.cpantesters.org/?dist=WWW-Gazetteer-HeavensAbove%200.18).
We used to give the White Camel Awards out at conferences because we knew the recipients would be in the audience. As we've become more inclusive with the rest of the globe, that didn't make sense. This year, we've waited until Perl's birthday to announce the awards.
The White Camel Awards recognize outstanding, non-technical achievement in Perl. Started in 1999 by Perl mongers and later merged with The Perl Foundation, the awards go each year to three names that the committee selects from a long list of worthy Perl volunteers, recognizing hard work in Perl Community, Perl Conferences, and Perl User Groups.
This year we took nominations through Mob Rater and developed a long list of worthy recipients. The White Camels recognize the efforts of these people, whose hard work has made Perl and the Perl community a better place:
Please bear with me. You need to hear a little bit of a story, before we can talk about success and failure.
When I was 19 years old, I started writing my first game, called deadEarth. It was an adventure role-playing game set in a post-apocalyptic wasteland filled with crazy mutants and tons of violence. Exactly the kind of thing a 19-year-old college sophomore would be into. It was cheesy, and I took myself way too seriously, but it was so much fun! The original manuscript was only 9 pages. Over the next several years, with the help of a bunch of friends, it became a 174-page book that I self-published.
As I am cozying up to Moose these days I wanted to play with Roles, as one is supposed to be able to skirt around the Diamond problem with them, and having some background in Lisp and Smalltalk I knew of them already.
Unfortunately, like many, many other tutorials on OO, we stayed firmly in the barnyard (well, in Moose's case a pet store), which of course is fine if you are an animal lover, but I am always modelling something much more complex. The idea of a 'Role' is far more abstract and multi-layered than differing animal noises, as this little syllogism points out:
All dogs have a bark.
But not all things that have a bark are dogs.
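In Moose terms the pun looks something like this (a toy sketch of my own): a role can be consumed by completely unrelated classes, so having a bark tells you nothing about being a dog.

package HasBark;
use Moose::Role;
requires 'bark';    # anything consuming this role must provide a bark

package Dog;
use Moose;
sub bark { 'Woof!' }
with 'HasBark';

package Tree;
use Moose;
sub bark { 'rough and corky' }
with 'HasBark';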
So what would be a good example of something that is complex, multi-layered, and demonstrates, to me at least, how roles should work?
Well I looked over on my book shelf and this caught my eye;
When it comes to repository structure, my philosophy has always been that you should have as few files as possible, with as little redundant information as possible; every file in the repository should end up in the final tarball, and its content should be identical to what lands in that tarball.
This is very different from Dist::Zilla, which has an extra config file, and files whose POD carries template content.
When you look after a few hundred modules, the fewer files you have to edit or save, the better.
But I recognise this approach isn't for everyone, so I've never encouraged others to follow my practices, and I've never packaged my scripts up into a module. Until now.
As my modules move onto GitHub they inherit the legacy of this minimal-files approach, at least in the short term, until I start changing the distribution structure over to a more conventional style.
I wrote CUDA::Minimal back in late 2010/early 2011 and used it in some of my research before defending my Ph.D. dissertation in May of 2011. At my postdoc I didn't use parallel anything, so CUDA::Minimal languished. When I picked it back up, it didn't even compile on a modern version of Perl. It was disheartening, to say the least.
But now, once again, it works (as long as you're not using Perl v5.16)!!!