I found an alternative solution to the problem above: instead of fetching the contents from AWS and buffering a large amount of data through to the browser, I generate a "public" URL for the file and redirect the user to that link.
// Getting the URL to an object
$url = $s3->getObjectUrl($targetBucket, $keyname);
// redirect user to the download link
header('Location: '.$url); exit;
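For completeness, here is a minimal self-contained sketch of that redirect approach using the AWS SDK for PHP v3. The region, bucket, and key names are placeholder assumptions, and the object must already be publicly readable for the plain URL to work:

// Minimal sketch, assuming the AWS SDK for PHP v3 is installed via Composer
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',            // assumption: your bucket's region
]);

$targetBucket = 'my-public-bucket';      // hypothetical bucket name
$keyname      = 'downloads/report.pdf';  // hypothetical object key

// Build the plain (unsigned) URL to the object
$url = $s3->getObjectUrl($targetBucket, $keyname);

// Redirect the browser straight to S3 instead of streaming the bytes through PHP
header('Location: ' . $url);
exit;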
If "Private" bucket and files, we can create a temporary bucket and copy the file to that bucket on request, like something below, and then again create link like above.
// Temporary bucket for public copies
$sourcebucket = '........';
$targetBucket = '........';
$keyname      = '........';

// Copy the object into the public bucket.
$s3->copyObject([
    'Bucket'     => $targetBucket,
    'Key'        => $keyname,
    'CopySource' => "{$sourcebucket}/{$keyname}",
]);

// Make the copied object publicly readable.
$s3->putObjectAcl([
    'Bucket' => $targetBucket,
    'Key'    => $keyname,
    'ACL'    => 'public-read',
]);
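Putting the private-file flow together, a rough end-to-end sketch might look like the following (SDK v3 syntax; the bucket and key names are hypothetical). Note that newly created S3 buckets block public ACLs by default, so the temporary bucket must have ACLs and public access allowed for 'public-read' to take effect:

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',                // assumption: your buckets' region
]);

$sourcebucket = 'my-private-bucket';         // hypothetical private source bucket
$targetBucket = 'my-temp-public-bucket';     // hypothetical temporary public bucket
$keyname      = 'downloads/report.pdf';      // hypothetical object key

// Copy the private object into the temporary public bucket.
$s3->copyObject([
    'Bucket'     => $targetBucket,
    'Key'        => $keyname,
    'CopySource' => "{$sourcebucket}/{$keyname}",
]);

// Mark only the copied object as publicly readable.
$s3->putObjectAcl([
    'Bucket' => $targetBucket,
    'Key'    => $keyname,
    'ACL'    => 'public-read',
]);

// Redirect the user to the now-public copy.
header('Location: ' . $s3->getObjectUrl($targetBucket, $keyname));
exit;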
I'm posting this here in case it helps someone else, since I spent a long time looking for a solution. :)