Can anyone tell me how to write a script to automate downloading results from a web page?
You'll want to look into the WWW::Mechanize module.
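Something along these lines is a typical starting point. This is only a sketch: the URL and the link-text pattern are placeholders you'd adjust to match your page.

```perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new;

# Placeholder URL -- replace with the page that lists your results
$mech->get('http://example.com/results');

# find_all_links() with text_regex matches links by their visible text;
# the qr/download/i pattern is an assumption about your page
for my $link ( $mech->find_all_links( text_regex => qr/download/i ) ) {
    my ($filename) = $link->url =~ m{([^/]+)\z};

    # mirror() (inherited from LWP::UserAgent) saves the file locally
    $mech->mirror( $link->url, $filename );
}
```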
Install Perl, preferably using perlbrew:
then install cpanm:
then install HTTP::Tiny:
shell> cpanm HTTP::Tiny
then copy the demo program from the HTTP::Tiny synopsis and edit it to suit,
then run it.
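The synopsis program is roughly the following (a sketch from memory, so check the module's documentation; the URL is a placeholder):

```perl
use strict;
use warnings;
use HTTP::Tiny;    # core module since Perl 5.14

# Placeholder URL -- point this at the page you want to download
my $url = 'http://www.example.com/';

my $response = HTTP::Tiny->new->get($url);

# get() returns a hashref; success is true for 2XX responses
die "Failed: $response->{status} $response->{reason}\n"
    unless $response->{success};

print "Downloaded ", length $response->{content}, " bytes\n";
```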
What kind of document are you downloading? One tried-and-true method is to use LWP; however, the Mojo::UserAgent module and its associated modules are catching on as well. Personally, I prefer LWP for most tasks, but I use Mojo::UserAgent whenever I need to interact with the DOM of an HTML page. I suspect that as I become more familiar with it, I'll prefer Mojo::UserAgent for more tasks.
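To illustrate the DOM interaction: Mojo::UserAgent pairs with Mojo::DOM, which lets you query a page with CSS selectors. A hedged sketch, assuming a hypothetical page whose download links carry a `download` class:

```perl
use strict;
use warnings;
use Mojo::UserAgent;

my $ua = Mojo::UserAgent->new;

# Placeholder URL -- substitute your own page
my $dom = $ua->get('http://example.com/results')->result->dom;

# find() takes a CSS selector; 'a.download' is an assumption
# about how the target page marks up its download links
print $_->attr('href'), "\n" for $dom->find('a.download')->each;
```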
There are a lot of modules you might consider -- I'm currently working on a review of modules for making HTTP requests (more complex solutions for browsing / crawling not covered). So far I've got 19 entries.
If you'd like to be a guinea pig, I'll give you early access to it -- email me at neilb at cpan dot org.
If the remote file doesn't change very often, and you want to store your own local copy of it, look at the mirror() method in HTTP::Tiny.
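A minimal sketch of mirror() in use (URL and filename are placeholders):

```perl
use strict;
use warnings;
use HTTP::Tiny;

# Placeholder URL and local filename -- substitute your own
my $response = HTTP::Tiny->new->mirror(
    'http://www.example.com/results.csv' => 'results.csv',
);

# mirror() sends If-Modified-Since when the local file already exists;
# a 304 response counts as success and leaves the local copy untouched
if ( $response->{success} ) {
    print $response->{status} == 304 ? "Unchanged\n" : "Updated\n";
}
```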
If you just want to GET a remote URL, and the results are likely to be different every time you fetch them, have a look at Net::HTTP::Tiny. It's new, but lightweight. It doesn't support HTTPS yet, though.
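If I remember its interface correctly, it exports a single http_get function; treat this as a sketch and check the module's documentation before relying on it:

```perl
use strict;
use warnings;
use Net::HTTP::Tiny qw(http_get);

# Plain http only -- the module doesn't support https yet.
# http_get() returns the response body, or dies on error.
print http_get('http://www.example.com/');
```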
I'm crazy about Perl and want to learn a few things.