perl - Transform callbacks into a stream


In Perl, how can I transform a function that requires a callback into a new function that returns a stream of results?

Imagine I have a fixed function that I can't change:

    sub my_fixed_handler {
        my $callback = shift;
        my $count = 1;
        while (1) {
            $callback->($count++);
        }
    }

To print the stream of numbers I can write this code:

    my_fixed_handler( sub {
        my $num = shift;
        print "...$num\n";
    });

But I need a function, based on my_fixed_handler, that returns the result of one calculation step at a time:

    my $stream = my_wrapper( my_fixed_handler( ... ) );
    $stream->next;  # 1
    $stream->next;  # 2

Is this possible?

This uses the fact that a pipe blocks when it is full: run the fixed handler in a forked process, with its callback writing to the parent via a pipe. While the parent is processing after a read, the pipe stays full and the writer waits. To make the pipe fill up quickly, write a long filler string after each piece of data.

    use warnings;
    use strict;
    use feature 'say';

    sub fixed_handler {
        my $callback = shift;
        #state $count = 1;  # would solve the problem
        my $count = 1;
        for (1..4) { $callback->($count++) }
    }

    pipe my $reader, my $writer  or die "can't open pipe: $!";
    $writer->autoflush(1);
    $reader->autoflush(1);

    my $fill_buff = ' ' x 100_000;  # (64_656 - 3); # see text

    my $iter = sub {
        my $data = shift;
        say "\twrite on pipe ... ($data)";
        say $writer $data;
        say $writer $fill_buff;    # (over)fill the buffer
    };

    my $pid = fork // die "can't fork: $!";

    if ($pid == 0) {
        close $reader;
        fixed_handler($iter);
        close $writer;
        exit;
    }

    close $writer;
    say "parent: started kid $pid";

    while (my $recd = <$reader>) {
        next if $recd !~ /\S/;    # throw out the filler
        chomp $recd;
        say "got: $recd";
        sleep 1;
    }

    my $gone = waitpid $pid, 0;
    if    ($gone > 0) { say "child $gone exited with: $?" }
    elsif ($gone < 0) { say "no such process: $gone" }

Output:

    parent: started kid 13555
            write on pipe ... (1)
    got: 1
            write on pipe ... (2)
    got: 2
            write on pipe ... (3)
    got: 3
            write on pipe ... (4)
    got: 4
    child 13555 exited with: 0

At first the writer keeps printing until it fills the buffer. Then, each time the reader gets a line, the writer can put one more (or two, if the prints' lengths vary), and so on. To check this, remove the say $writer $fill_buff; line — in the output you will see all the "write on pipe" lines first, and only then do the parent's prints go. A common buffer size nowadays is 64K.

However, we are told that each step of fixed_handler takes time, so we would wait through thousands of such steps before any processing starts in the parent (depending on the size of each write), until the buffer fills and the writer starts getting blocked at each write.

One way out of this is to write a string long enough to fill the buffer, and then discard it in the reader. I found it finicky to get the exact length right, though. For one thing, the buffer size found inside the program by

    my $cnt;
    while (1) { ++$cnt; print $writer ' '; print "\r$cnt" }   # reader sleeps

differs from the size found on the command line in a similar way. There were also (sometimes) "double writes." While that may not be a show stopper, I went with 100K to make sure to fill it.

See this post for a discussion of buffer sizes, for example.
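On Linux specifically, the kernel can report the pipe capacity directly, which avoids empirical probing. A minimal sketch, not from the original post — the fcntl constant 1032 is Linux's F_GETPIPE_SZ (see fcntl(2)) and is not portable:

```perl
use strict;
use warnings;

pipe my $reader, my $writer or die "can't open pipe: $!";

# F_GETPIPE_SZ (1032) is Linux-specific; fcntl returns the capacity in bytes.
my $F_GETPIPE_SZ = 1032;
my $size = fcntl($writer, $F_GETPIPE_SZ, 0)
    or die "fcntl failed (non-Linux system?): $!";
print "pipe capacity: $size bytes\n";   # commonly 65536 on Linux
```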

Another way may be to set the pipe's buffer size using IO::Handle::setvbuf. However, I ran into "not implemented on this architecture" (on our production machines), so I would not consider that.

Messing with buffering will of course slow down the communication a lot.

This implements an idea from melpomene's comments.
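To close the loop with the question's desired interface, the pipe-and-fork approach can be wrapped into an object with a blocking ->next method. A minimal sketch — the PipeStream class and its internals are hypothetical, not from the original post, and it omits the buffer-filler trick, so the child may run ahead by up to one pipe buffer before blocking:

```perl
use strict;
use warnings;
use IO::Handle;

package PipeStream;

sub new {
    my ($class, $handler) = @_;
    pipe my $reader, my $writer or die "can't open pipe: $!";
    my $pid = fork // die "can't fork: $!";
    if ($pid == 0) {                        # child: run the fixed handler,
        close $reader;                      # sending each value up the pipe
        $writer->autoflush(1);
        $handler->(sub { print $writer "$_[0]\n" });
        close $writer;
        exit;
    }
    close $writer;
    return bless { reader => $reader, pid => $pid }, $class;
}

sub next {
    my $self = shift;
    my $line = readline $self->{reader};    # blocks until the child writes
    return undef if not defined $line;      # child closed the pipe: done
    chomp $line;
    return $line;
}

package main;

sub fixed_handler {                         # stand-in for the unchangeable function
    my $callback = shift;
    $callback->($_) for 1 .. 4;
}

my $stream = PipeStream->new(\&fixed_handler);
print $stream->next, "\n";   # 1
print $stream->next, "\n";   # 2
```

Each call to next blocks until the child has produced one more value, which gives the one-step-at-a-time behavior the question asks for.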

