To start off I attended Sawyer X's rather sad talk 'No one is Immune to Abuse'. I cannot do this talk justice in a short blog post, so I suggest you seek out the recording when it comes out; I will see if I can repost it here. It had a large number of lessons for all of us in the Perl community.
Next up was me, but unfortunately I had some connection errors and a somewhat shorter talk, though both of the attendees gave me good feedback ;) Oh well, that is what happens when you are scheduled at the same time as two of the 'Big Wigs' in the Perl community. Luck of the draw I guess.
I hung around in the same room after my talk and stayed for Lee's talk on how he is recreating his childhood by playing Legend of Zelda. Not a Perl talk, but an interesting one nonetheless, as Lee went into some detail on how hackers out there have modified the game by simply moving where all the 'Treasures' are normally found, so now one can play the game in a whole new way.
Then came lunch with an old school chum of mine at his club, catching up on 40 years of personal history and planning a school meetup for the coming fall, but that is another story.
I then went to the Meet the TPRF talk, and believe me, it was nice not to see the same old faces; there is a new generation of leaders at the TPRF, and they are bringing a little life back into the brand.
Next I went to Mohammad Anwar's talk on his Perl and Raku Weekly Challenge, and though the turnout was better than mine, I was a little disappointed that more people did not show up for the White Camel award winner's talk; again, that is what happens when you schedule someone opposite a big wig. Anyway, I was taken aback by the amount of work and the community that Mohammad has created. I remember here on the blog when he started out, really just as something to kill the time between gigs. I guess I should spend more time looking into the challenge, though I never really get any of them.
The last talk I attended was Dave Cross's on using GitHub for development. Now at least I understand how my company's CI system works, as it has always been a black box to me.
Well, that is about it, as the lightning talks were next, and then the announcement of where next year's conference will be, so I guess I might be seeing you again in Las Vegas.
]]>That was a local joke; now on to what I got up to today.
Ovid, despite Air France's best efforts, actually did make it to the conference late the night before, so he was able to give his keynote on OO in the Perl core. Seems we will be getting something called Corinna: soon we will have field, class, role and method to play with, and if you are brave you can get the latest version of perl and play with a few parts of it. The key sticky part is typing; seems there is another project out there called Oshun to handle all those nasty typing problems. Well, to quote the main character of Sweden's best-known work of literature:
'We shall see, what we shall see'
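For the brave: on a perl new enough to ship the experimental class feature (5.38+), a minimal sketch of the new keywords might look like this (my own toy example, not from the keynote):

```perl
use v5.38;
use experimental 'class';    # the Corinna-derived OO keywords, still experimental

class Point {
    field $x :param = 0;     # fields, settable from the constructor
    field $y :param = 0;

    method coords { return "($x, $y)" }
    method move_x ($dx) { $x += $dx }
}

my $p = Point->new( x => 3, y => 4 );
say $p->coords;              # (3, 4)
```

Roles and the typing story are the parts still to come, which is where Oshun fits in.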
Next there was a talk by Dimitrios Kechagias, who gave some very good insights into how important it is to at least try to do a little optimization of your code. The best takeaway: do the obvious stuff first, 'profile' your code, and then decide if that last 3% is worth the effort. On the technical side of things I did take away one lesson on the proper way to call encode when dealing with UTF-8; I will have to give it a try on my code base once I get back to the office, but I will have to do some profiling first.
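I did not write down his exact example, but the usual advice here (my reconstruction, not his slide) is to spell the encoding name 'UTF-8', which gets you the strict, validating codec, rather than 'utf8', which is Perl's lax internal flavour:

```perl
use strict;
use warnings;
use Encode qw(encode decode);

my $string = "caf\x{e9}";                 # a decoded Perl string with an e-acute

my $octets = encode( 'UTF-8', $string );  # strict, validating UTF-8
printf "octets: %vd\n", $octets;          # octets: 99.97.102.195.169

my $back = decode( 'UTF-8', $octets );    # and strict on the way back in
print "round trip ok\n" if $back eq $string;
```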
Lunch was next, and I took the opportunity to visit a few haunts from the last time I spent any time in The Big Smoke. I guess one cannot relive one's youth, as nothing much remains of the few hot-spots I visited so many years ago.
Well, Cam Beaudoin was up next with his talk on why disability inclusion is important even to us programmers. Seems we all benefit when an accommodation is made for someone; I will think of this each time I have two coffees in hand and press the automatic door opener on my way into a building. Not only do we all benefit physically, but this sort of inclusion brings different perspectives that may benefit the bottom line.
Lee J put down his camera for a few minutes and gave a talk on Continuous Deployment. Seems we have gotten it right where we work, as we follow most of what Lee was suggesting, but there was one good takeaway: medium-sized organizations have to be much more careful with the CI process. When something goes south, and it will happen, a small organization can recover quickly simply because it is small, and a large one might be able to stop mid-process before everything comes down, so it still has something up and running. A medium organization, however, can neither recover quickly nor stop halfway through. Something to think on.
Next was a new one on me, a pre-recorded presentation: Paul Evans gave a What's New talk on Perl 5.38. Unfortunately Paul recorded his presentation while the M1 was busy, so it was sometimes hard to hear; Paul might want to talk to Cam about accommodations ;). Anyway, it was neat to see what is coming; seems Ovid's OO stuff is there and works. Paul was even online after the canned talk for a few remote questions.
Just for fun, and it was, I attended Gene Boggs's presentation on 'Algorithmic Music'. It seems that Perl can sing, or at least produce MIDI muzak. The funny thing is that his little perl programs have great potential as teaching and learning tools; I will have to look into that later.
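Gene's examples use proper CPAN music modules, but even bare-bones Perl can spit out a playable MIDI file with nothing but pack; this little sketch of mine (not from the talk) writes a C major triad as three quarter notes:

```perl
use strict;
use warnings;

# Build a tiny single-track MIDI file by hand with pack.
my @notes = ( 60, 64, 67 );    # MIDI note numbers for C4, E4, G4

my $events = '';
for my $n (@notes) {
    $events .= pack 'C4', 0x00, 0x90, $n, 0x60;    # delta 0, note on, velocity 96
    $events .= pack 'C4', 0x60, 0x80, $n, 0x00;    # delta 96 ticks, note off
}
$events .= pack 'C4', 0x00, 0xFF, 0x2F, 0x00;      # end-of-track meta event

my $track  = 'MTrk' . pack( 'N', length $events ) . $events;
my $header = 'MThd' . pack( 'N', 6 ) . pack( 'nnn', 0, 1, 96 );  # format 0, 1 track, 96 ticks/quarter

open my $fh, '>:raw', 'triad.mid' or die "Can't write triad.mid: $!";
print {$fh} $header, $track;
close $fh;
```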
Finally, Aadharsh Hariharan presented the academic work he has been pursuing for his Master's degree. Seems he was the one conducting that Perl survey that went out some time ago. He presented a good model built from the survey results and some follow-on interviews, and, no real surprise here, we in the Perl community have some work to do. At least with the model he produced we can measure whether we are making any progress in getting Perl back into the mainstream. An important step IMHO.
Anyway, time to review my slides one more time for my talk tomorrow.
So there are about 100 of us Perl types here today, with most participants coming from across the US and Canada, but a few came from over 'The Big Pond'.
So far the talks have been very good. I took in an interesting talk on Test2 by Chad Granum, something I will have to look into, as my old test suite is becoming a little flimsy and is a patchwork of kludges.
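I have only skimmed the docs so far, but assuming the Test2::Suite distribution from CPAN is installed, its Test2::V0 bundle looks something like this (hypothetical data, just to get a feel for the style):

```perl
use Test2::V0;

my $config = { name => 'paws', retries => 3, tags => [ 'aws', 'perl' ] };

# is() in Test2 does deep comparisons out of the box
is( $config->{retries}, 3, 'retry count' );

is(
    $config,
    hash {
        field name    => 'paws';
        field retries => 3;
        field tags    => [ 'aws', 'perl' ];
        end();
    },
    'whole config matches'
);

done_testing;
```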
The next speaker I took in was Joelle Maslak, who gave a very good overview of how and why an IP packet will take a four-step bounce from the US to France and back to the US; seems the answer is just not what one would think.
I was finally able to hear a talk by Ingy Dot Net, something I have been wanting to do for many years, but I always seem to be in the wrong place at the wrong time. Ingy gave us the 25c tour of a new language he is working on, YAMLScript. We will see if his plan to bring a number of the programming communities closer together will work.
Gilles Darold was next, and he gave a good albeit short presentation on pgCluu, a system written in Perl to monitor and report on your PostgreSQL server. Something I am going to have to take a close look at now that I am working in a PG shop.
Finally, I attended a talk that will scare most of the faint-hearted: 'Remedial Math For Programmers' (brings back memories of my last year in high school, he says while curled up in the fetal position under his chair). Walter Mankowski was the speaker, and for once I actually did not sleep through a math talk; I actually understood about 99% of it. I guess I either have learned a few things since failing grade 13 math, or Walter just teaches it better.
All in all a good day, as I met a number of fellow Perl types I haven't seen since this silly pandemic affected us all.
I even think I am getting a little of my social skills back, but the jury is still out on that one.
TTFN
]]>I updated my Mojolicious for the first time in quite a while and my personal web server died.
Well, this one was 100% my fault, as the error was
Can't locate object method "route" via package "Mojolicious::Routes" at /johns/perl/Mojolicious-Plugin-Routes-Restful-0.03-2/blib/lib/Mojolicious/Plugin/Routes/Restful.pm line 124.
So I had a snoop around Mojolicious today and found this in the change log
8.67 2020-12-04
  - Deprecated Mojolicious::Routes::Route::route in favor of Mojolicious::Routes::Route::any.
  - Deprecated Mojolicious::Routes::Route::over in favor of Mojolicious::Routes::Route::requires.
  - Deprecated Mojolicious::Routes::Route::via in favor of Mojolicious::Routes::Route::methods.
Ok and then a little higher in the file
9.0 2021-02-14
  …
  - Removed deprecated detour, over, route and via methods from Mojolicious::Routes::Route.
Ok, so there is my problem. A few minutes of re-learning how to use GitHub, then a quick swap of 'route' for 'any' and 'via' for 'methods', and I thought I was all done.
I ran the test suite and
t/10-basic-routes.t ..... ok
t/20-basic-rest.t ....... ok
t/30-advanced-routes.t .. ok
Mojo::Reactor::Poll: I/O watcher failed: A response has already been rendered at /opt/cgt/milk/local/lib/perl5/Mojolicious/Controller.pm line 154.
# Premature connection close
t/40-advanced-rest.t .. 16/?
# Failed test 'GET /V_1/myapp/project/1/view_users/1'
# at t/40-advanced-rest.t line 102.
# Failed test '200 OK'
# at t/40-advanced-rest.t line 102.
# got: undef
# expected: '200'
# Looks like you failed 2 tests of 17.
t/40-advanced-rest.t .. Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/17 subtests
Ok, where did that come from??
Well, a good few hours later, having exhausted all my Google searches, I had traced the croak back to this line
croak 'A response has already been rendered' if $c->stash->{'mojo.respond'}++;
in Mojolicious::Controller (the file the error message points at).
Being the ham-fisted programmer that I am, I took that offending line out of my installed copy, as I could find 'mojo.respond' nowhere else in the code, and of course this was the result
t/40-advanced-rest.t .... ok
Oh goody a bug to report!!
So, as I was busily writing up a bug report, happy in thinking that at least one of my problems today was not my fault, I had a little brainstorm just before hitting 'submit': I wanted to know who to 'blame' for this obvious bug.
Ok Sebastian is at fault, good!
However, my better judgment kicked in (funny, it never usually does), and I decided to have a look at the diff for this change, where I found the note
Throw an exception if double rendering is attempted
Now that got me thinking, so I had a look at the innards of my test and found this
sub get {
    my $self = shift;
    if ( $self->req->method eq 'GET' ) {
        $self->render( json => { status => 200 } );
    }
    $self->render( json => { status => 404 } );
}
Oops, I am doing a double render, d'oh!
So one 'else' fixes that
if ( $self->req->method eq 'GET' ) {
    $self->render( json => { status => 200 } );
}
else {
    $self->render( json => { status => 404 } );
}
t/40-advanced-rest.t .... ok
Boy I am glad I did not hit submit on that bug report.
So if you run into that croak message
Mojo::Reactor::Poll: I/O watcher failed: A response has already been rendered at /opt/cgt/milk/local/lib/perl5/Mojolicious/Controller.pm
Check your code, as it is not Mojolicious doing you wrong.
]]>Been quite a year for everyone on this big blue marble. I hope you are all good.
Ok, here is the very short post for today.
I just did my first build and upload of PAWS to CPAN
Expect Version 0.43 to be up there later today some time.
It was a bit of an epic on my part, as this whole releasing thingy is new to me; I made some real bad goofs, like deleting and then checking in a folder (thank goodness for 'git revert') and getting the version number wrong.
Hopefully it comes out ok.
Look for more releases in the future.
]]>
Egad, I have been away for a while. It is not due to laziness on my part; I really have been stuck on a Paws problem over the past month-plus. Add to that the dozens of inside and outside projects that I need to get done around the house, and the time has just not been there.
At least I have finally cracked it.
I really went down a rabbit hole for this one and spent way too many hours trying to figure out how to test 'Paws Pagination' end to end.
In my last post I started out with a new test suite '30_pagination.t' and a few test YAMLs.
Just getting the YAML right took God only knows how many iterations. I also had to create a completely new caller, 'TestPaginationCaller.pm', to get the tests to work.
So here is the 25c story on how it works. I start with the normal two YAML files, one for content and one for tests. The difference is that I have both the 'request' and 'response' content and tests in each. The content file looks like this;
---
requests:
  - Limit: 2
  - Limit: 2
    ExclusiveStartStreamName: Test1
  …
  - Limit: 2
    ExclusiveStartStreamName: Test5
responses:
  - content: "{\"HasMoreStreams\":true,\"StreamNames\":[\"Test1DataStream\",\"Test1\"]}"
    headers:
      content-length: 65
      content-type: application/x-amz-json-1.1
    status: 200
  - content: "{\"HasMoreStreams\":true,\"StreamNames\":[\"Test2DataStream\",\"Test2\"]}"
    headers:
      content-length: 72
      content-type: application/x-amz-json-1.1
    status: 200
  …
  - content: "{\"HasMoreStreams\":false,\"StreamNames\":[\"Test6DataStream\"]}"
    headers:
      content-length: 53
      content-type: application/x-amz-json-1.1
    status: 200
while my test YAML looks like this;
---
call: ListAllStreams
service: Kinesis
request:
  pages:
    - tests:
        - path: content
          expected: "{\"Limit\":2}"
          op: eq
          type: json
        - path: headers
          key: content-type
          expected: "application/x-amz-json-1.1"
          op: eq
        - path: method
          expected: POST
          op: eq
        - path: parameters
          key: Action
          expected: ListStreams
          op: eq
  …
response:
  pages:
    - tests:
        - type: ARRAY
          expected:
            - Test1DataStream
            - Test1
...
To run the tests I have '30_pagination.t'. I won't put all of that code here; suffice it to say I just load in the content and tests and then iterate over each of the 'pages' and run the tests.
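The loop itself is nothing fancy; a minimal sketch of the idea (with the YAML already loaded into a hashref, only the 'eq' op supported, and made-up captured data) would be:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical, pre-loaded equivalent of the test YAML above.
my $spec = {
    call    => 'ListAllStreams',
    service => 'Kinesis',
    request => {
        pages => [
            { tests => [ { path => 'method', expected => 'POST', op => 'eq' } ] },
        ],
    },
};

# Stand-in for the request data recorded by the test caller, one entry per page.
my @captured = ( { method => 'POST' } );

my $page_no = 0;
for my $page ( @{ $spec->{request}{pages} } ) {
    my $data = $captured[ $page_no++ ];
    for my $t ( @{ $page->{tests} } ) {
        is( $data->{ $t->{path} }, $t->{expected},
            "page $page_no: $t->{path} $t->{op} $t->{expected}" );
    }
}
done_testing();
```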
Sounds simple enough, but here is the rub.
When I set up the call to 'ListAllStreams' with the pagination sub added in, like this;
my @shards = ();
my $count  = 0;
my $ListOutput = $s3->ListAllStreams(
    sub {
        push( @shards, $_ );
        $count++;
    },
    Limit => 2
);
I ran into an endless loop. The reason for this was in the auto-generated Kinesis.pm code;
if (not defined $callback) {
    …
} else {
    while ($result->HasMoreStreams) {
        $callback->($_ => 'StreamNames') foreach (@{ $result->StreamNames });
        $result = $self->ListStreams(@_, ExclusiveStartStreamName => $result->StreamNames->[-1]);
    }
    $callback->($_ => 'StreamNames') foreach (@{ $result->StreamNames });
}
You can see there is a while loop that will never be broken without changing the Kinesis.pm file, which I was not about to do.
So there I am, thinking it would be a rather easy task: just set up the call and test the output. Sister! Was I ever wrong!
I tried every Perl Monks incantation on this one with little or no luck, until I finally figured out that I could at least pass a call to a local sub in that 'callback'. I tried that like this;
$result = $service->$call_method(
    sub {
        _test_result($_);
    },
    %{$request}
);
Now I had at least something to play with. Well, I eventually broke out of the endless loop, but only by using a 'die', which was not really a long-term solution.
With the 'callback' sub calling another sub, I eventually was able to test the content from the response. I had to add a few new attributes to my 'TestPaginationCaller.pm' to get what I need in the sub. In the end my call looked like this;
_test_result($service->caller, $service->caller->response_test->tests->[0], $_ );
and the sub like this;
sub _test_result {
    my ( $caller, $test, $result ) = @_;
    my $count    = $caller->counter();
    my $max      = scalar( @{ $test->{expected} } ) - 1;
    my $expected = $test->{expected}->[$count];
    ok( $expected eq $result,
        "Got expected result for #$count $expected=$result" );
    if ( $count == $max ) {
        $caller->reset_counter();
        die;
    }
    $caller->inc_counter();
}
I just let the while loop run until my internal counter is reached, then die. It worked, but it is not a solution.
I had but one option left, something I have never done in Perl. As a matter of fact, the last time I did this I was playing with Applesoft BASIC on a Unitron 2200.
I give now fair warning to any sensitive readers out there to stop here and go read something else.
Please, please do not look at the next few lines of code; it may cause gnashing of teeth and pulling of hair;
You have been warned!!
Last chance!!
You will regret it!!!
Think of the children!!!
I solved my problem like this;
$result = $service->$call_method(
    sub {
        _test_result($_);
    },
    %{$request}
);
NEXT_REQUEST:
…
if ( $count == $max ) {
    $caller->reset_counter();
    goto NEXT_REQUEST;
…
Actually, after I got the above squared away, the rest of the changes were just a simple cut-and-paste job to test the 'requests' in the 'do_call' sub of 'TestPaginationCaller.pm' rather than in the t/30_pagination.t file. Hence the 'Test' in the first part of its name.
Finally back on track, and hopefully my next post will not take as long.
]]>Anyway, to recap where I left off: I was just getting the 'SubscribeToShard' action to work with an HTTP stream, after a fashion anyway. Then I got sidetracked a little, playing about with the problem of testing whether the stream was correctly sending data down the pipe and whether I was decoding it correctly.
As a byproduct of getting to the bottom of that, I finally figured out what the PAWS 'Paginators' are for and, I guess, how to use them.
I noticed the odd "NextToken" tag in some of the Boto JSON files; as well, most of the services have a 'paginators-1.json' definition file, and looking at the Kinesis pod I see that there are paginators listed.
PAGINATORS
Paginator methods are helpers that repetitively call methods that return partial results
…
Well, I had to look into this, as it is something new for me that I had never come across on my PAWS journey. Poking about, I did find that the paginators are being generated by PAWS, but there are very few test cases, only 4 for S3.
To get the paginators to work, all one has to do is pass in an anonymous sub when calling the action. Below you can see this in its simplest form;
my @pages;
my $count = 0;
my $ListOutput = $kinesis->ListAllStreamConsumers(
    sub {
        my $page = $_;
        push( @pages, $page );
        $count++;
        print("\nPage: $count");
    },
    StreamARN  => 'arn:aws:kinesis:us-east-1:985173205561:stream/TestSteam5Shard',
    MaxResults => 1
);
print "\nMy page count was $count";
print "\nI got " . Dumper( \@pages );
When run, the output looked like this;
Page: 1
Page: 2
Page: 3
My page count was 3;
I got [ bless( {
'ConsumerName' => 'TestKinesisApp',
'ConsumerARN' => 'arn:aws:kinesis:us-east-1:985173205561:stream/TestSteam5Shard/consumer/TestKinesisApp:1581111187',
'ConsumerCreationTimestamp' => '1581111187',
'ConsumerStatus' => 'ACTIVE'
}, 'Paws::Kinesis::Consumer'
…]
So it looks like I have another void to fill. There is a generic test rig for paginators present, and at first glance it looks much the same as the rig for requests and responses, but alas, no automatic test generator.
Well, it seems I have learned at least one thing on my PAWS journey, and that is to get the tests squared away first. So, keeping that in mind, the plan for today is to come up with another caller/test generator that will create tests for the paginators. That way, like in my other PAWS adventures, I can write new real-world test scripts and create the canned tests from them at the same time.
Now where to start?? I guess 't/26_paginators.t'. Well, I had a look in there, and I could see it was only half made up, with the service and action hard-coded in place. Not much to start with.
Unfortunately, I will not be able to reuse the 09 and 10 test suites, because for a full test I have to provide both the 'Request', to test the anonymous sub hookup, and the 'Response', to see how PAWS handles the 'sub' part of the response, which will do the iteration.
So a new test suite is required, and I am going to call it 30_pagination for now.
I also had to do a deep dive into the guts of PAWS and how it does pagination, so I could get a handle on how to test it properly. The pattern is the same throughout PAWS: each of the paginators is a sub found in the class of the Action.
I am presently working on 'ListAllStreams', and if I go into the 'Paws::Kinesis' class there is the following implementation of the sub;
sub ListAllStreams {
    my $self = shift;
    my $callback = shift @_ if (ref($_[0]) eq 'CODE');
    my $result = $self->ListStreams(@_);
    my $next_result = $result;
    if (not defined $callback) {
        while ($next_result->HasMoreStreams) {
            $next_result = $self->ListStreams(@_, ExclusiveStartStreamName => $next_result->StreamNames->[-1]);
            push @{ $result->StreamNames }, @{ $next_result->StreamNames };
        }
        return $result;
    } else {
        while ($result->HasMoreStreams) {
            $callback->($_ => 'StreamNames') foreach (@{ $result->StreamNames });
            $result = $self->ListStreams(@_, ExclusiveStartStreamName => $result->StreamNames->[-1]);
        }
        $callback->($_ => 'StreamNames') foreach (@{ $result->StreamNames });
    }
    return undef;
}
As you can see, it follows one of two paths: no callback, which I do not care about, and callback. There is not much in there for the callback. It just takes the current result, which in this case is a 'Paws::Kinesis::ListStreamsOutput' that has already been coerced from the query, iterates over each of the values in the 'StreamNames' attribute, and passes each of those off to the passed-in sub; then it gets the next result and repeats till nothing is left.
As the above is template-generated code, there is not much I can do here on the testing side, so I am really just limited to looking at the output, which in this case is just an array ref of 'Stream Names'.
Hmm.
Well, the end result could be almost anything you want, but in the simplest form, with a callback sub like this;
sub { push( @shards, $_ ) },
you would get something like this
[
    'Test2DataStream',
    …
    'TestStream11',
]
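To convince myself I understood the flow, I mocked it up in plain Perl, with canned "pages" standing in for the successive ListStreams responses (this is my own toy, not Paws code):

```perl
use strict;
use warnings;

# Canned responses play the role of successive pages from the service.
my @responses = (
    { HasMoreStreams => 1, StreamNames => [ 'Test2DataStream', 'Test3' ] },
    { HasMoreStreams => 0, StreamNames => ['TestStream11'] },
);

# Same shape as the generated callback path in ListAllStreams.
sub list_all_streams {
    my ($callback) = @_;
    my $result = shift @responses;
    while ( $result->{HasMoreStreams} ) {
        $callback->($_) foreach @{ $result->{StreamNames} };
        $result = shift @responses;    # fetch the "next page"
    }
    $callback->($_) foreach @{ $result->{StreamNames} };
    return;
}

my @shards;
list_all_streams( sub { push @shards, $_[0] } );
print join( ',', @shards ), "\n";    # Test2DataStream,Test3,TestStream11
```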
So I started by creating the two standard test YAML files. I have the .test file
---
call: ListAllStreams
service: Kinesis
tests:
  pages: 5
  items: 10
pages:
  - expected: ARRAY
    op: eq
    values:
      - Test2DataStream
      - Test3
  - expected: ARRAY
    op: eq
    values:
      - Test2DataStream
      - Test3
  - expected: ARRAY
    op: eq
    values:
      - TestSteam5Shard
      - TestStream10
  - expected: ARRAY
    op: eq
    values:
      - TestStream11
      - TestStream2
  - expected: ARRAY
    op: eq
    values:
      - TestStream5
      - TestStream6
  - expected: ARRAY
    op: eq
    values:
      - TestStream8
      - TestStream9
  - expected: ARRAY
    op: eq
    values:
      - TestStrem4
and
---
request:
  params:
    MaxResults: 1
    NextToken: mynext
    StreamCreationTimestamp: 1581111187
response:
  pages:
    - {"HasMoreStreams":true,"StreamNames":["Test2DataStream","Test3"]}
    - {"HasMoreStreams":true,"StreamNames":["TestSteam5Shard","TestStream10"]}
    - {"HasMoreStreams":true,"StreamNames":["TestStream11","TestStream2"]}
    - {"HasMoreStreams":true,"StreamNames":["TestStream5","TestStream6"]}
    - {"HasMoreStreams":true,"StreamNames":["TestStream8","TestStream9"]}
    - {"HasMoreStreams":false,"StreamNames":["TestStrem4"]}
for the content file.
I did look about in the various pages, and they all seem to return everything from simple scalars, arrays of scalars, and single classes to collections of classes. So to account for this in the 'expected', I can use the values ARRAY, HASH, or even the class name, or perhaps even something like ARRAY[CLASS].
Well, I guess my next post is to come up with a test for the above.
]]>In my last post I managed to get 'SubscribeToShard' to work with my stream decoder, though it is really just beta code for now. What first got me distracted was reading along in the Amazon doc and seeing a bit about streaming an audio file.
Well, the last time I worked on this sort of stream was in the dying days of the last century.
This got me thinking, so I went downstairs, dusted off my good old 2201, and fired it up, thinking it might come in useful. Next I had to find some 'C' code and files from that time, which I think I had on a 3.5" floppy in my upstairs closet.
Well, I found the disks, and once my HP Pavilion booted up I found that the disks were still readable and the code still compiled. Steps one and two done.
Now, the test plan I had in the 'C' code was used to confirm that compressed files coming off of streams were coming down correctly. To accomplish this, you would put a canned RealTime file up on the web, then run the 'C' program, which would set up two workers: one to read the data coming in from the socket, and one to read another local version of the file nibble by nibble (boy, am I old), so it would always know what was supposed to be coming down the pipe. Then you started the test by trying to stream the file.
The 'C' program then sends two signals to the two channels of the digital-to-analog board I have on my HP, and from there to the two channels of the oscilloscope, where the two traces should be out of phase by 4 bits (one nibble), and visually one could see from the pattern whether the data was streaming correctly.
Well just a few problems with the above plan.
First, I could not get the HP connected to the web; seems Windows 3.x is no longer supported, and I had nowhere to plug in my 56k modem.
Second, the digital-to-analog board was no longer in my HP (not sure where it went).
Third, when I fired up the O/scope for the first time in 15+ years, nothing happened except that acrid smell of old electrolytic capacitors in their death throes, and then the single green dot: the O/scope's version of the blue screen of death.
I suppose I could recap my O/scope and then calibrate it (I would need to buy a high-end signal generator and another scope to do that); then I could run that 'C' program on my laptop, after recoding it a little to send the digital output to two SD ports. However, I would have to make a brace of these
so I could get the digital output into analog so my scope could read it correctly.
Well, it would be fun to play with valves again and get my scope working, but the time to do all of the above would see my post count drop to 0 for a few months (though some might like that) and empty my pocketbook a little.
Anyway, it seems all of the above was a little premature, as I failed to read the rest of the document from AWS, where they explain that part of the audio stream is a checksum that can be used as it is streaming.
So I guess digging all that junk out of the basement was for naught.
I did discover that I have left a few things out of my stream decoder, namely the ability to stream chunks of data and to limit my streams to the max of one meg.
Well, for now I can leave that out, as the Kinesis data is normally only small amounts of text.
So way off the Paws track today. Hopefully I can get back on track in my next post.
]]>When I last posted on the Kinesis 'SubscribeToShard' action, I discovered that it returns an 'application/vnd.amazon.eventstream', and that led me down a very deep rabbit hole that got me well sidetracked.
Well, to start out I had to figure out what AWS was returning when it was sending 'vnd.amazon.eventstream'. I eventually found that here: Event Stream Encoding
Ok, time to take the way-back machine to my first play-dates with computers: assembling GIS data from an Amdahl mainframe that was spooling a 9-inch tape directly to my Unitron 2000
over a 300-baud modem, then taking the various bits and putting them back together so I could draw pretty maps on this;
Though mine was the budget 880.
Anyway, scratching my head a little, I figured that whatever solution I come up with, I am not going to treat the handling of this stream as an integrated part of PAWS. It will have to be a separate CPAN mod, the same as 'Net::Amazon::Signature::V4': a generic module that can be used by anyone who may need it.
Now the only question is what to call it: 'AWS::EventStream::VND', 'Net::Amazon::EventStream::VND' or 'Net::AWS::EventStream::VND'?
At this point it doesn't matter; I really just want to get it working, and I can sort that out later.
Well, from past experience, the first rule of working with any bit-stream is;
Always start with a data file never a stream.
So in my case I just dumped the content that came back from 'SubscribeToShard' when it timed out after five mins, and it looked like this;
^@^@^@r^@^@^@`«<82>^M<9e>^K:event-type^G^@^Pinitial-response^M:content-type^G^@^Zapplication/x-amz-json-1.1^M:message- type^G^@^Eevent{}¬®k}^@^@^@ò^@^@^@ej^NI<83>^K:event-t ….
Not very easy on the eyes. AWS does provide a nice pattern diagram to look at;
So let's tackle the Prelude: that is the first 8 bytes, and then 4 more for a CRC.
my $filename = 'shards';
open my $fh, '<:raw', $filename or die "Can't open $filename: $!";
my $bytes_read = read $fh, my $prelude, 8;
Now to get the binary into something we humans can read;
my ( $total_length, $header_length ) = unpack 'N*', $prelude;
print "total_length=$total_length,header_length=$header_length\n";
and that will give me;
total_length=114,header_length=96
Ahh, good old Perl: no need for anything fancy, just one extra param on a read to get 8 bytes out of a file, and unpack built right in. It did take a little while to figure out which template to use; I had to reach way back in my brain to my 'C' days. I guess most of that data in there is now lost, as all I remember is that it uses some sort of template. Sort of ashamed to admit I had to look up which one to use.
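A quick way to convince yourself the template is right is to round-trip the prelude with pack, using the very numbers from the dump above:

```perl
use strict;
use warnings;

# The prelude is two network-order (big-endian) 32-bit integers:
# total length and header length.
my $prelude = pack 'N N', 114, 96;
die "prelude should be 8 bytes\n" unless length($prelude) == 8;

my ( $total_length, $header_length ) = unpack 'N N', $prelude;
print "total_length=$total_length,header_length=$header_length\n";
```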
The next four bytes are a CRC checksum that is used to ensure you have decoded the first two values correctly: it is a 'CRC' digest of the eight prelude bytes. But how to check it? Well, CPAN comes to the rescue with 'Digest::Crc32'.
use Digest::Crc32;

$bytes_read = read $fh, my $prelude_checksum, 4;
my ($checksum) = unpack 'N', $prelude_checksum;
my $crc = Digest::Crc32->new();
if ( $crc->strcrc32($prelude) != $checksum ) {
    die "Prelude checksum fails!";
}
print "Prelude checksum Pass\n";
and when I run it I get
total_length=114,header_length=96
Prelude checksum Pass
So that is part one done. Really not much else to it. I did find this module very useful: 'IO::Scalar'.
The problem being, you can't just read the full record of the stream and play with the bits. The structure forces you to jump around a bit (pardon the pun) in the stream and then find your way back to where you left off.
On my first iteration I had to make, I think, six position changes and resets. Thanks to IO::Scalar I managed to get that down to just one when I re-factored the spaghetti into a little module.
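As an aside (my own sketch, not how the module ended up): core perl can open a filehandle directly on a scalar, which gives you the same seek/tell juggling without the extra module:

```perl
use strict;
use warnings;

# Open a read handle on an in-memory scalar, then hop around in it.
my $content = pack( 'N N', 114, 96 ) . 'rest-of-record';
open my $fh, '<:raw', \$content or die "Can't open in-memory handle: $!";

read $fh, my $prelude, 8;    # read the 8-byte prelude
my $pos = tell $fh;          # remember where we are
seek $fh, 0, 0;              # jump back (say, to CRC-check the prelude)
read $fh, my $again, 8;
seek $fh, $pos, 0;           # and return to where we left off

print "prelude re-read matches\n" if $prelude eq $again;
```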
Eventually I got the decoding working, and what I was getting from the stream looked like this;
headers={
':message-type' => 'event',
':event-type' => 'initial-response',
':content-type' => 'application/x-amz-json-1.1'
};
message={}
which was the first message; most of the rest looked like this
headers={
':message-type' => 'event',
':event-type' => 'SubscribeToShardEvent',
':content-type' =>'application/x-amz-json-1.1'
};
message={"ContinuationSequenceNumber":"49604106570538379893614088729479714815975373587922026498","MillisBehindLatest":0,"Records":[]}
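Once the event-stream framing is stripped off, the message body is plain x-amz-json-1.1, so core JSON::PP can take it the rest of the way:

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);

# The message payload from the decoded event above.
my $message = '{"ContinuationSequenceNumber":"49604106570538379893614088729479714815975373587922026498",'
            . '"MillisBehindLatest":0,"Records":[]}';
my $data = decode_json($message);

print "behind: $data->{MillisBehindLatest} ms, records: ",
    scalar @{ $data->{Records} }, "\n";
```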
Now that that is done, time to see if I can get a real stream to read.
It did take me quite some time to actually get this to work, after a fashion, and I will give you the quick review. Luckily, I have played with event streams and HTTP before, mostly with Mojo but the odd time with LWP, so I at least knew where to start. As well, I found a few test cases in /t that helped out.
So I first needed direct access to the 'User Agent' that was handling the call to AWS, so I made an instance of my 'FullTestMakerLWPCaller' mod like this;
my $caller = FullTestMakerLWPCaller->new();
Which, if you recall, is just a mucked-up version of 'LWP' so I can easily get to the 'User Agent'. What I want to do is add in a 'handler' for the 'response_data' event, like this
use IO::Scalar;

$caller->ua->add_handler(
    'response_data',
    sub {
        my ( $response, $ua, $h, $data ) = @_;
        my $es      = AWS::EventStream::VND->new();
        my $content = $response->content;
        my $ios     = IO::Scalar->new( \$content );
        my $output  = $es->decode($ios);
        print $output . "\n";
        return 1;
    },
);
my $aws = Paws->service(
    'Kinesis',
    region => 'us-east-1',
    debug  => 1,
    caller => $caller,
);
my $Output = $aws->SubscribeToShard(
    ConsumerARN => 'arn:aws:kinesis:us-east-1:32938372322:stream/TestSteam5Shard/consumer/TestKinesisApp:1581111187',
    ShardId     => 'shardId-000000000000',
    StartingPosition => {
        Type => 'LATEST',
    }
);
In the above I create an instance of my 'AWS::EventStream::VND' decoder, then get the content from the response, convert that to an IO::Scalar, then pass that to my decode sub which returns the decoded content, which I then print. The next two statements just set the first call in motion.
The really important thing in the above is to include that return 1; in the handler sub, or else you will only ever decode the first part of the stream content rather than handling everything that is coming down the pipe.
I ran the above and did get streaming content printing, though it is not of much use as the above is rather hacked-up code.
Paws does have a way to handle the above and that is its Pagination system. But that is another post.
First things first, a little word on Kinesis. Well, in short, it is touted as a very scalable real-time data-stream thingy that sings, dances, and basically makes your life much better. Myself, I do not have a use for it, but it is part of the system and there is a bug, so in I go.
I first had to set things up on the AWS server side with some permissions etc., the usual stuff. I also had to run a number of commands to build up my Kinesis system to a point where I could actually use 'SubscribeToShard'.
I took, by now, my standard path: I created a number of small scripts, one for each command I was using. Well, I eventually had to make my way through 10 actions to get set up. Nothing lost though, as for each action I played with I created a test case with 'FullTestMakerLWPCaller.pm'. A good thing, as no functional tests exist that I can find in Paws.
Anyway, the first thing I noticed (thanks to Jess https://github.com/castaway who pointed it out) was that the boto for the eventual output of this action used something I had not seen before: "eventstream":true
"SubscribeToShardEventStream":{
"type":"structure",
"required":["SubscribeToShardEvent"],
"members":{
"SubscribeToShardEvent":{"shape":"SubscribeToShardEvent"},
"ResourceNotFoundException":{"shape":"ResourceNotFoundException"},
"ResourceInUseException":{"shape":"ResourceInUseException"},
"KMSDisabledException":{"shape":"KMSDisabledException"},
"KMSInvalidStateException":{"shape":"KMSInvalidStateException"},
"KMSAccessDeniedException":{"shape":"KMSAccessDeniedException"},
"KMSNotFoundException":{"shape":"KMSNotFoundException"},
"KMSOptInRequired":{"shape":"KMSOptInRequired"},
"KMSThrottlingException":{"shape":"KMSThrottlingException"},
"InternalFailureException":{"shape":"InternalFailureException"}
},
"eventstream":true
},
Well, if I have learned one thing in my Paws play-dates, it is that there are very few one-offs in boto. So I snooped about and found it in two others: S3, which is mentioned in the bug, but also Pinpoint (one that I actually use).
The second thing I have learned is that it is far better to work with boto than against it. I have seen similar things in the past, so the first thing I need to do is look at the 'Paws::Kinesis::SubscribeToShardEventStream' package and see what I have.
package Paws::Kinesis::SubscribeToShardEventStream;
use Moose;
has InternalFailureException => (is => 'ro', isa => 'Paws::Kinesis::InternalFailureException' );
has KMSAccessDeniedException => (is => 'ro', isa => 'Paws::Kinesis::KMSAccessDeniedException' );
has KMSDisabledException => (is => 'ro', isa => 'Paws::Kinesis::KMSDisabledException' );
has KMSInvalidStateException => (is => 'ro', isa => 'Paws::Kinesis::KMSInvalidStateException' );
has KMSNotFoundException => (is => 'ro', isa => 'Paws::Kinesis::KMSNotFoundException' );
has KMSOptInRequired => (is => 'ro', isa => 'Paws::Kinesis::KMSOptInRequired' );
has KMSThrottlingException => (is => 'ro', isa => 'Paws::Kinesis::KMSThrottlingException' );
has ResourceInUseException => (is => 'ro', isa => 'Paws::Kinesis::ResourceInUseException' );
has ResourceNotFoundException => (is => 'ro', isa => 'Paws::Kinesis::ResourceNotFoundException' );
has SubscribeToShardEvent => (is => 'ro', isa => 'Paws::Kinesis::SubscribeToShardEvent' , required => 1);
1;
Well, nothing to tell me this is an event stream. If you have been following along, the next thing I will have to do is play with the templates to add in the value I want. I really only need to flag that this call is an event stream, and I have done that many times before with the good old Moose attribute 'class_has'.
Seeing that this is an end item, my best place to add in my code is in the 'default/object.tt' template, and this is what I did
…
[% END -%]
++[%- IF (shape.eventstream) %]
++ class_has _event_stream => (is => 'ro', default => 1);
++[%- END %]
1;
[% iclass=shape; INCLUDE 'innerclass_documentation.tt' %]
The neat thing is this should also work for the other two APIs, but we will see later. First a quick recompile;
carton exec builder-bin/gen_classes.pl --classes botocore/botocore/data/kinesis/2013-12-02/service-2.json
and now my class has this;
has SubscribeToShardEvent => (is => 'ro', isa => 'Paws::Kinesis::SubscribeToShardEvent', required => 1);
class_has _event_stream => (is => 'ro', default => 1);
1;
So now to use it?
Kinesis uses the 'JsonCaller' and 'JsonResponse' modules, so I will be working on those, but in this post I am really only interested in 'Response'.
So after a quick poke about in 'lib/Paws/Net/JsonResponse.pm' (fortunately I am fairly familiar with this module), I added in this warning near the end of the 'response_to_object' sub;
sub response_to_object {
…
return Paws::API::Response->new(_request_id => $request_id) if (not $returns);
++ warn("Does this do an evnet stream =".$ret_class->does('_event_stream'));
my $unserialized_struct = $self->unserialize_response( $response );
…
}
Now just to run my script and let's see if I am on the right track.
Eventually, after about 5 minutes, my script returned a response. This was expected, though, as the API doc says it keeps the Event Stream open only for that long. The response did return a '200', but what I got back was gobbledygook that even the boys over at CuriousMarc would have a hard time decrypting.
It looked something like this
'_content' => 'r`▒▒ :message-typeevent{}▒▒k}▒ejI▒z-json-1.1 :message-typeevent{"ContinuationSequenceNumber":"49604106570538379893614004884743816156465460913031348226","MillisBehindLatest":0,"Records":[]}-▒▒▒ejI▒ :event-typeSub:message-typeevent...…
but I did see a successful line of debugging as well
Does this do an evnet stream =1
So, a few good things.
I did take a very close look at what was returned and I noticed that the 'content-type' was something called 'application/vnd.amazon.eventstream'. So it looks like I have some research to do. Really going to have to put my thinking cap on for this one.
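From the published description of that content type, each message carries a 12-byte prelude (total length, headers length, and a prelude CRC, all big-endian) followed by the headers, the payload, and a trailing message CRC. A rough sketch of pulling the payload out of one message, with the CRCs faked as zero for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build a toy vnd.amazon.eventstream message: 12-byte prelude
# (total length, headers length, prelude CRC), headers, payload, message CRC.
my $headers = '';                       # no headers in this toy message
my $payload = '{"Records":[]}';
my $total   = 12 + length($headers) + length($payload) + 4;
my $msg     = pack( 'N N N', $total, length($headers), 0 )   # CRCs faked as 0
            . $headers . $payload . pack( 'N', 0 );

# Decode: read the prelude, then slice the payload out of the middle.
my ( $total_len, $headers_len ) = unpack 'N N', $msg;
my $body = substr $msg, 12 + $headers_len,
    $total_len - 12 - $headers_len - 4;

print "total=$total_len headers=$headers_len payload=$body\n";
```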
Looking at S3, I have only one error with the 09_requests.t test suite;
ok 829 - Call S3->SelectObjectContent from t/09_requests/s3-select-object-content.request
not ok 830 - Got content eq from request
# Failed test 'Got content eq from request'
# at t/09_requests.t line 123.
# got: '<SelectObjectContentRequest xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><InputSerialization><CompressionType>NONE</CompressionType></InputSerialization><OutputSerialization><CSV><FieldDelimiter>\,</FieldDelimiter><QuoteCharacter>\'</QuoteCharacter><QuoteEscapeCharacter>\"</QuoteEscapeCharacter><QuoteFields>ASNEEDED</QuoteFields><RecordDelimiter>\\n</RecordDelimiter></CSV></OutputSerialization><Expression>MyExpression</Expression><ExpressionType>SQL</ExpressionType></SelectObjectContentRequest>'
# expected: '<InputSerialization><CompressionType>NONE</CompressionType></InputSerialization><OutputSerialization><CSV><FieldDelimiter>\,</FieldDelimiter><QuoteCharacter>\'</QuoteCharacter><QuoteEscapeCharacter>\"</QuoteEscapeCharacter><QuoteFields>ASNEEDED</QuoteFields><RecordDelimiter>\\n</RecordDelimiter></CSV></OutputSerialization>'
I rechecked the API and my real-world test and found that the action 'SelectObjectContent' is a one-off case in S3 that requires the root tag and the 'xmlns' to be present.
So a simple fix to the test got that sorted.
The 10_response.t test had a few more problems, as I had 10 fails in there. I had a problem with the tags in the 'MetricsConfiguration' class. The XML back from the server was fine
<Tag>
<Key>priority</Key>
<Value>high</Value>
</Tag>
but my output into the class was off
'And' => bless( {
'Prefix' => 'documents',
'Tags' => []
}, 'Paws::S3::MetricsAndOperator' )
So I checked the generated class MetricsAndOperator, and it looked OK;
has Tags => (is => 'ro', isa => 'ArrayRef[Paws::S3::Tag]', request_name => 'Tag', list_request_name => 'Tag' , traits => ['NameInRequest','ListNameInRequest'] );
I changed that 'list_request_name' to 'Tags' and I got the correct results, so that must mean my template is not in order.
A quick look in object.tt and this change
--[%- IF (shape.members.${param_name}.locationName) %], list_request_name => '[% shape.members.${param_name}.locationName %]'
++[%- IF (shape.members.${param_name}.locationName != member.member.locationName) %], list_request_name => '[% shape.members.${param_name}.locationName %]'
Fixed that problem. So I did a recompile to see if I broke anything else.
Oh well, 314 tests fail on CloudFront in 10_response.t. So something went awry.
Looking at one fail, I see that it links back to the CloudFrontOriginAccessIdentityList class and its Items attribute
has Items => (is => 'ro', isa => 'ArrayRef[Paws::CloudFront::CloudFrontOriginAccessIdentitySummary]', request_name => 'CloudFrontOriginAccessIdentitySummary', list_request_name => '' , traits => ['NameInRequest','ListNameInRequest'] );
This time that 'list_request_name' is present but empty, and when I put 'Items' in there things work fine, so it is the template once again.
Well, I ran the tests again and I am just playing 'whack-a-mole': as I make a change in one place I kill the other, and now I have 140 fails in the 09 suite. Darn!
Well, after playing about with the templates for a good few hours I dropped that line of inquiry and concentrated on what I could do in 'RestXMLResponse.pm', and within a few minutes I found a solution in the 'new_from_result_struct' sub.
--if ($meta->does("ListNameInRequest")){
++if ($meta->does("ListNameInRequest") and $meta->{list_request_name} eq 'Items'){
    $result->{$meta->{list_request_name}} =
        $result->{$meta->{list_request_name}}->[0]->{$meta->request_name};
}
I did do some hard-coding here, but I double-checked all the boto and all those 'Items' and 'ListNameInRequest' line up correctly, so I think I can get away with this.
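The unwrap that the fix performs can be sketched in plain Perl; the hash below is a hypothetical stand-in for what the XML parser hands back when `<Tags><Tag>…</Tag></Tags>` comes in:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# XML like <Tags><Tag>...</Tag></Tags> tends to parse into an extra layer
# of nesting; the fix collapses it back to a flat list of tags.
my $result = {
    Tags => [ { Tag => [ { Key => 'priority', Value => 'high' } ] } ],
};
my ( $list_name, $request_name ) = ( 'Tags', 'Tag' );

$result->{$list_name} = $result->{$list_name}[0]{$request_name};

print scalar @{ $result->{Tags} }, " tag(s), first Key=",
    $result->{Tags}[0]{Key}, "\n";    # prints: 1 tag(s), first Key=priority
```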
I also took the opportunity to clean up the 'object.tt' template a little with this patch
--[%- IF (member.type == 'list' and member.member.locationName.defined) %][% traits.push('NameInRequest','ListNameInRequest') %], request_name => '[% member.member.locationName %]'
++[%- IF (shape.members.${param_name}.locationName) %], list_request_name => '[% shape.members.${param_name}.locationName %]'
[%- IF (shape.members.${param_name}.locationName != member.member.locationName) %], list_request_name => '[%- IF (!shape.members.${param_name}.locationName.defined) %][%param_name%][% ELSE %][% shape.members.${param_name}.locationName %][% END %]'
[%- ELSE %]
--, list_request_name => '[% param_name %]'
++, list_request_name => '[%member.member.locationName %]'
[% END %]
[%- ELSE %]
Fixed up a few other errors.
I recompiled the differing code bases, then fixed up 12 tests in the 09_requests.t suite and 6 tests in the 10_response.t suite, and now I am getting a 100% pass.
I recompiled everything and I now have the full system testing at 100%.
So that is the end, I guess. Now I just have to wait for this to get into Paws. I don't see much action on that part, and all the emails I sent out to Jose Luis Martinez have bounced back.
Hopefully we can see some action in the near future.
In the end I checked in 30+ new test cases and over 2k of tests the other day. So I can safely say that 'CloudFront' is fully operational.
That leaves only 'Route53' to look at, and for me this is somewhat problematic. The Route53 API deals with 'Domains', 'Checks', 'Hosts', 'Traffic' and such, and to test 90% of the actions in this API you will need real instances of them.
As I fail on all four of the above, I am not comfortable creating working scripts for this API.
I am still going to forge ahead with dummy/invalid data. Though the actions will most likely fail, I can still get the XML correct, which is half the battle.
Starting out I quickly ran into a bug with 'ChangeTagsForResource'.
$ListObjectsV2Output = $s3->ChangeTagsForResource(
ResourceId => 'MyTagResourceId',
ResourceType => 'healthcheck',
AddTags => [
{
Key => 'MyTagKey',
Value => 'MyTagValue',
},
{
Key => 'MyTagKey',
Value => 'MyTagValue',
},
],
RemoveTagKeys => [
'MyTagKey', 'Key2'
],
);
The 'RemoveTagKeys' was failing, as my '_to_xml' sub in 'RestXMLCaller.pm' does not handle an array directly. Funny we have yet to run into this so far, with some 300 actions all working.
To fix this I had to extend the code to capture a simple 'ARRAY', but even with that I needed to translate it into some sort of XML, and from the API doc what I am looking for is;
<RemoveTagKeys>
<Key>string</Key>
</RemoveTagKeys>
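In plain Perl, the wrapping I am after looks something like this; a sketch of the target output matching the XML just above, not the Paws code itself:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $list_name = 'RemoveTagKeys';    # the attribute's own name
my $location  = 'Key';              # the member's locationName from boto
my @values    = ( 'MyTagKey', 'Key2' );

# Wrap each element in the member tag, then the whole list in its own tag.
my $xml = join '', map { "<$location>$_</$location>" } @values;
$xml = "<$list_name>$xml</$list_name>" if $location ne $list_name;

print "$xml\n";
# prints: <RemoveTagKeys><Key>MyTagKey</Key><Key>Key2</Key></RemoveTagKeys>
```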
so I checked the attribute in the generated class;
has RemoveTagKeys => (is => 'ro', isa => 'ArrayRef[Str|Undef]' );
and unfortunately there is nothing I can use there to get that 'Key' for my tag. To be 100% sure I checked boto, which led me here
"RemoveTagKeys":{
"shape":"TagKeyList",
and then here
"TagKeyList":{
"type":"list",
"member":{
"shape":"TagKey",
"locationName":"Key"
},
"max":10,
"min":1
},
Ah, there is something I can play with. It is rather buried, but I at least have that 'locationName'. Another dive into the templates.
So I played in there a bit and found this hash was being returned;
{
'min' => 1,
'example_code' => ' [
\'MyTagKey\', ... # max: 128
]',
'max' => 10,
'member' => {
'locationName' => 'Key',
'shape' => 'TagKey'
},
'perl_type' => 'ArrayRef[Str|Undef]',
'type' => 'list'
};
and with this patch to the template
[% IF (shape.members.$param_name.locationName != '' || member.member.locationName != param_name) %]
[%- IF (shape.members.$param_name.locationName == 'x-amz-meta-') %]
+ [%- ELSIF (shape.members.$param_name.locationName and shape.members.$param_name.locationName != param_name); traits.push('NameInRequest'); %], request_name => '[% shape.members.$param_name.locationName %]'
+ [%- ELSIF (member.member.locationName != param_name); traits.push('NameInRequest'); %], request_name => '[% member.member.locationName %]'
[%- END %]
[%- END %]
It kinda worked with this result;
use Moose;
has AddTags => (is => 'ro', isa => 'ArrayRef[Paws::Route53::Tag]', request_name => 'Tag', traits => ['NameInRequest'] );
has RemoveTagKeys => (is => 'ro', isa => 'ArrayRef[Str|Undef]' , request_name => 'Key', traits => ['NameInRequest'] );
Which I think I can work with.
I eventually got it to work with this patch
if ( ref $attribute_value ) {
my $location =
$attribute->does('NameInRequest')
? $attribute->request_name
: $attribute->name;
if ( $attribute->does('Flatten') ) {
$xml .= $self->_to_xml($attribute_value);
}
elsif ( $call->can('_namspace_uri') ) {
$xml .= sprintf '<%s xmlns="%s">%s</%s>', $location,
$call->_namspace_uri(), $self->_to_xml($attribute_value),
$location;
}
+ elsif (ref($attribute_value) eq 'ARRAY'){
+ my $location = $attribute->name;
+ my $list_name = $attribute->name;
+
+ $location = $attribute->request_name
+ if ( $attribute->can('request_name'));
+ my $temp_xml = (
+ join '',
+ map {
+ sprintf '<%s>%s</%s>', $location,
+ ref($_) ? $self->_to_xml($_) : $_,
+ $location
+ } @{ $attribute_value }
+ );
+ $temp_xml = "<$list_name>$temp_xml</$list_name>"
+ if ( $location ne $list_name );
+ $xml .= $temp_xml;
}
else {
Let's see if I killed anything else, as I did change one of the templates. So after a regeneration and retest I got
Failed 104/1792 subtests
in the 09_requests.t test case. Ouch!
Most seem to be like this
not ok 1717 - Got content eq from request
# Failed test 'Got content eq from request'
# at t/09_requests.t line 123.
# got: '< xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><TagSet>
# expected: '<Tagging xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><TagSet>
Going back a bit, I had a look in the generated class 'PutPublicAccessBlock' and found that the 'request_name' was empty;
has PublicAccessBlockConfiguration => (is => 'ro', isa => 'Paws::S3::PublicAccessBlockConfiguration'
, request_name => '', traits => ['NameInRequest'] , required => 1);
So it is something in the template again. That request_name has to be filled in or the trait not applied.
I did a quick change to the template
--[%- ELSIF (member.member.locationName != param_name); traits.push('NameInRequest'); %], request_name => '[% member.member.locationName %]'
++[%- ELSIF (member.member.locationName != '' and member.member.locationName != param_name); traits.push('NameInRequest'); %], request_name => '[% member.member.locationName %]'
to account for that, and now I am only getting about 20 errors when I run both the 09 and 10 test suites.
I am not going to worry much about those as at first glance they mostly look like problems with the tests and not the actual code I generated.
I managed to plow through most of the Route53 commands and they all now pass valid XML tests, so I think my Paws RestXML API journey is coming to an end.
I got stuck on the 'UpdateCloudFrontOriginAccessIdentity' call.
It seemed simple enough
$s3->UpdateCloudFrontOriginAccessIdentity(
CloudFrontOriginAccessIdentityConfig => {
CallerReference => 'Some text here',
Comment => 'Mr Pooppy buthole did this',
},
Id=> 'E3D5Y5RWA05QO1',
);
but I kept running into this error;
The request failed because it didn't meet the preconditions in one or more request-header fields.
Ok what is that?
After playing about for a long time and re-reading a number of differing API docs I finally figured it out.
I am in one of those cases where I need to follow a set protocol to get the action to work.
First I have to make a call to 'GetCloudFrontOriginAccessIdentityConfig', like this;
$ListObjectsV2Output = $s3->GetCloudFrontOriginAccessIdentityConfig(
Id=>'E3D5Y5RWA05QO1',
);
with the 'Id' I am interested in, and that will give me this;
bless( {
'_request_id' => '1ee7bc4b-1bc6-4b7e-9a51-16bc4139198c',
'ETag' => 'E2J612BD0LRDHQ',
'CloudFrontOriginAccessIdentityConfig' => bless( {
'CallerReference' => 'some test here',
'Comment' => 'This is Mr Poopy Buthole calling'
},
'Paws::CloudFront::CloudFrontOriginAccessIdentityConfig' )
}, 'Paws::CloudFront::GetCloudFrontOriginAccessIdentityConfigResult' );
Now I have three things: the 'Id', the 'ETag', and the current 'CloudFrontOriginAccessIdentityConfig'.
You need all three to update.
So now my call looks like this
$ListObjectsV2Output = $s3->UpdateCloudFrontOriginAccessIdentity(
CloudFrontOriginAccessIdentityConfig => {
CallerReference => 'some test here',
Comment => 'Mr Pooppy buthole did this',
},
Id => 'E3D5Y5RWA05QO1',
IfMatch => 'E2J612BD0LRDHQ',
);
and now I am getting this
bless( {
'CloudFrontOriginAccessIdentity' => bless( {
'Id' => 'E3D5Y5RWA05QO1',
'S3CanonicalUserId' => '84...71',
'CloudFrontOriginAccessIdentityConfig' => bless( {
'CallerReference' => 'some test here',
'Comment' => 'Mr Pooppy buthole did this'
},
'Paws::CloudFront::CloudFrontOriginAccessIdentityConfig' )
},
'Paws::CloudFront::CloudFrontOriginAccessIdentity' ),
So all is good.
I did get one more fail with the above, again nothing to do with Paws: I had a mismatch between the two 'CallerReference' values. Seems these have to match up or you will get a 'You cannot update the value of CallerReference.' Good to know.
I can see why they did this: AWS is trying to prevent double 'Submits', so you have to link one call up with another. My choice of 'some test here' was not a good one; I should have used a hi-res time-stamp or the like to get a real unique id.
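For instance, something along these lines (Time::HiRes is core) would have given me a unique CallerReference each run; the 'paws-' prefix is just my own made-up convention:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(time);

# A hi-res timestamp makes a cheap, effectively unique CallerReference.
my $caller_reference = sprintf 'paws-%.6f', time();
print "$caller_reference\n";    # e.g. paws-1578412800.123456
```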
Now onto the 'Delete' action, and this time I followed the correct procedure by getting the Identity I wanted to delete and finding out its ETag,
and with the call
$s3->DeleteCloudFrontOriginAccessIdentity(
Id=>'EY07SKBZ90C5A',
IfMatch=>'E1DHMR4TNGBY5N'
);
I get a perfect response
$VAR1 = bless( {
'_request_id' => 'ff8462e0-82eb-4e58-8cd5-95276eb535d2'
}, 'Paws::API::Response' )
Things are looking up.
At the moment I am getting 400 errors such as 'InvalidArgument' or 'InvalidOrigin' on the Delete and Create actions, as I do not have the proper config on the AWS end for the Creates, and for the Deletes I do not have anything on my AWS account to delete.
Reading through the API documentation, it seems there is quite the procedure to actually do some of the actions. For example, to invoke the DeleteStreamingDistribution action you have to follow six pre-steps, all of which must pass. So I guess I can forget a quick run on this API.
So the plan is to get all the real-world scripts written up and then go through the full CRUD actions for each and get them working, with a good generated test case for each.
After a few hours of mind-numbingly boring cut-and-paste and the odd edit, I was ready to really get cracking, and the first action I started to look at was 'CreateCloudFrontOriginAccessIdentity'.
Now this I know is working as I have played with it before but this time out I am going to have a much closer look at it.
Well, when I run it I do get a Status 201, but my returned class is
bless( {
'ETag' => 'E2J612BD0LRDHQ',
'Location' => 'https://cloudfront.amazonaws.com/2019-03-26/origin-access-
identity/cloudfront/E3D5Y5RWA05QO1',
'CloudFrontOriginAccessIdentity' => bless( {
'Id' => '',
'CloudFrontOriginAccessIdentityConfig' => bless( {
'CallerReference' => '',
'Comment' => ''
},
'Paws::CloudFront::CloudFrontOriginAccessIdentityConfig' ),
'S3CanonicalUserId' => ''
}, 'Paws::CloudFront::CloudFrontOriginAccessIdentity' ),
'_request_id' => '3b880846-30c6-11ea-a755-f94346075f98'
}, 'Paws::CloudFront::CreateCloudFrontOriginAccessIdentityResult' );
which is missing most of the attribute values, as my returned XML is
<?xml version="1.0"?>
<CloudFrontOriginAccessIdentity xmlns="http://cloudfront.amazonaws.com/doc/2019-03-26/">
  <Id>E3D5Y5RWA05QO1</Id>
  <S3CanonicalUserId>84f47125a87a...1</S3CanonicalUserId>
  <CloudFrontOriginAccessIdentityConfig>
    <CallerReference>some test here</CallerReference>
    <Comment>This is Mr Poopy Buthole Calling</Comment>
  </CloudFrontOriginAccessIdentityConfig>
</CloudFrontOriginAccessIdentity>
OK, back into good old RestXMLResponse.pm to see what is being dropped. My first hunch was that the XML was not being parsed correctly, and after adding a little debugging my suspicions were correct
{
'Location' => {
'Id' => 'E3D5Y5RWA05QO1',
'S3CanonicalUserId' => '84f47125a87a26e...632e70ffcd71',
'xmlns' => 'http://cloudfront.amazonaws.com/doc/2019-03-26/',
'CloudFrontOriginAccessIdentityConfig' => {
'Comment' => 'This is Mr Poopy Buthole calling',
'CallerReference' => 'some test here'
}
}
};
so something is out of whack between the class and the returned XML. Well, the first place to look is in the boto.
"CreateCloudFrontOriginAccessIdentityResult":{
"type":"structure",
…
"payload":"CloudFrontOriginAccessIdentity"
},
so the payload of that is 'CloudFrontOriginAccessIdentity' but in the generated class we have
package Paws::CloudFront::CreateCloudFrontOriginAccessIdentityResult;
use Moose;
…
use MooseX::ClassAttribute;
class_has _payload => (is => 'ro', default => 'Location');
1;
So there is the problem. Into the callresult_class.tt template to correct that.
It did not take me long to find my mistake. Seems the other day when I fixed the template I did not notice in the boto that sometimes the name of the param and the payload may not match up. So to fix that I just did this edit;
[%- IF (shape.members.$param_name.streaming == 1) %], traits => ['ParamInBody'][% stream_param = param_name %][% END %]
-- [%- IF (shape.payload) %] [% has_payload=param_name %] [% END %]
++ [%- IF (shape.payload) %] [% has_payload=shape.payload %] [% END %]
[%- IF (c.required_in_shape(shape,param_name)) %], required => 1[% END %]);
and on a recompile I get what I want.
bless( {
'_request_id' => 'e7719890-30ca-11ea-a584-9ba090fa6cd5',
'ETag' => 'E2J612BD0LRDHQ',
'CloudFrontOriginAccessIdentity' => bless( {
'CloudFrontOriginAccessIdentityConfig' => bless( {
'CallerReference' => 'some test here',
'Comment' => 'This is Mr Poopy Buthole calling'
},
'Paws::CloudFront::CloudFrontOriginAccessIdentityConfig' ),
'S3CanonicalUserId' => '84...f',
'Id' => 'E3D5Y5RWA05QO1'
}, 'Paws::CloudFront::CloudFrontOriginAccessIdentity' ),
'Location' => 'https://cloudfront.amazonaws.com/2019-03-26/origin-access-
identity/cloudfront/E3D5Y5RWA05QO1'
}, 'Paws::CloudFront::CreateCloudFrontOriginAccessIdentityResult' );
A good start for today. But I really want to do things right with this round of changes, so I also checked my generated test and found a bug there I should fix; I was getting this
headers: x-amz-request-id: ~
so I am not setting that correctly. After a little poking about, I noticed there are a few ways AWS returns the request id, so I updated my test generator to reflect this.
I next noticed that my request tests were not generating correctly either, as I was getting this
---
CloudFrontOriginAccessIdentityConfig: !!perl/hash:Paws::CloudFront::CloudFrontOriginAccessIdentityConfig
  CallerReference: some test here
  Comment: This is Mr Poopy Buthole calling
This one took a little while to figure out, but in the end the fix was very simple. The problem was with the way I was generating the above YAML: I was simply coercing my caller class into a hash like this;
my $call_params = {%$call};
The problem with the CloudFront API is there are a large number of embedded objects in most of the call classes, and my simple coerce will not work correctly with embedded objects.
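You can see why the shallow coerce falls over with a toy blessed object standing in for an embedded Paws class (the class names here are made up); the inner object stays blessed, which is exactly what YAML then dumps as !!perl/hash:

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Toy::Config;
sub new { my ( $class, %args ) = @_; return bless {%args}, $class }

package main;

my $call = bless {
    Id     => 'SomeId',
    Config => Toy::Config->new( Comment => 'hi' ),
}, 'Toy::Call';

# The shallow coerce copies the top level only; the embedded object
# is still blessed, so a YAML dump tags it with its class name.
my $shallow = {%$call};
print ref( $shallow->{Config} ), "\n";    # prints: Toy::Config
```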
Fortunately someone must have blundered into this before, because in the 'Paws::CloudFront' class there is a 'to_hash' sub that converts a call into a hash, and I can access it directly as I pass that in on the $service param. All I need is this little patch;
-- my $call_params = {%$call};
++ my $call_params = $service->to_hash($call);
and now I get
---
CloudFrontOriginAccessIdentityConfig:
  CallerReference: some test here
  Comment: This is Mr Poopy Buthole calling
I then went to check that new test, and when I tried to run it I got;
can't locate Paws/CLOUDFRONT.pm
Another little bug: seems I need the class name vs the service name. Easy enough;
my @services = split("::",ref($service));
my $test_hash = {
call => $call->_api_call,
service => $services[1],
…
and now my 09_request test passes
ok 1 - Call CloudFront->CreateCloudFrontOriginAccessIdentity from t/09_requests/cloudfront-createcloudfrontoriginaccessidentity.request
ok 2 - Got content eq from request
ok 3 - Got method eq from request
ok 4 - Got Param->key: content-md5 eq from request
ok 5 - Got Param->key: content-type eq from request
ok 6 - Got Param->key: host eq from request
ok 7 - Got Param->key: x-amz-content-sha256 eq from request
ok 8 - Got Param->key: CloudFrontOriginAccessIdentityConfig.CallerReference eq from request
ok 9 - Got Param->key: CloudFrontOriginAccessIdentityConfig.Comment eq from request
ok 10 - Have https://cloudfront.amazonaws.com/2019-03-26/origin-access-identity/cloudfront in the URL
ok 11 - Have /2019-03-26/origin-access-identity/cloudfront in the URL
ok 12 - Have /2019-03-26/origin-access-identity/cloudfront in the URI
1..12
as does the 10_response test;
ok 1 - Call CloudFront->CreateCloudFrontOriginAccessIdentity from t/10_responses/cloudfront-createcloudfrontoriginaccessidentity.response
ok 2 - Got CloudFrontOriginAccessIdentity.CloudFrontOriginAccessIdentityConfig.CallerReference eq some test here from result
ok 3 - Got CloudFrontOriginAccessIdentity.CloudFrontOriginAccessIdentityConfig.Comment eq This is Mr Poopy Buthole calling from result
ok 4 - Got CloudFrontOriginAccessIdentity.Id eq E3D5Y5RWA05QO1 from result
ok 5 - Got CloudFrontOriginAccessIdentity.S3CanonicalUserId eq 84f47125a87a26ea5ba42f3be65fbefebdb7440d82e7d27907c52c969ac4f6c05ef03046db8cd6f74dab632e70ffcd71 from result
ok 6 - Got ETag eq E2J612BD0LRDHQ from result
ok 7 - Got Location eq https://cloudfront.amazonaws.com/2019-03-26/origin-access-identity/cloudfront/E3D5Y5RWA05QO1 from result
ok 8 - Got _request_id eq 5459351b-bb65-414e-9696-df581ea8b373 from result
1..8
so now onto the next call.