Laravel Integration
Ploi Cloud object storage is fully S3-compatible, making it easy to integrate with Laravel's built-in filesystem.
Prerequisites
Before configuring Laravel, ensure you have:
- Created an object storage instance
- Created at least one bucket
- Created a user with an access key
Required Package
Laravel's S3 driver requires the Flysystem S3 adapter, which pulls in the AWS SDK. Install it via Composer:
composer require league/flysystem-aws-s3-v3 "^3.0"
Environment Configuration
Add the following variables to your .env file:
FILESYSTEM_DISK=s3
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_DEFAULT_REGION=nl-ams
AWS_BUCKET=your-bucket-name
AWS_ENDPOINT=https://your-storage-id.ploi-cloud-storage.com
AWS_USE_PATH_STYLE_ENDPOINT=true
Replace the placeholder values:
- your-access-key-id - Your access key ID from the Users tab
- your-secret-access-key - Your secret access key
- your-bucket-name - The name of your bucket
- your-storage-id.ploi-cloud-storage.com - Your storage endpoint URL
Filesystem Configuration
Laravel's default config/filesystems.php should work out of the box. The S3 disk configuration uses these environment variables automatically:
's3' => [
'driver' => 's3',
'key' => env('AWS_ACCESS_KEY_ID'),
'secret' => env('AWS_SECRET_ACCESS_KEY'),
'region' => env('AWS_DEFAULT_REGION'),
'bucket' => env('AWS_BUCKET'),
'url' => env('AWS_URL'),
'endpoint' => env('AWS_ENDPOINT'),
'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
'throw' => false,
],
Basic Usage
Once configured, use Laravel's Storage facade to interact with your object storage:
Storing Files
use Illuminate\Support\Facades\Storage;
// Store a file from a request
$path = Storage::put('avatars', $request->file('avatar'));
// Store with a specific filename
Storage::putFileAs('avatars', $request->file('avatar'), 'user-123.jpg');
// Store raw content
Storage::put('documents/readme.txt', 'Hello World');
Retrieving Files
// Get file contents
$contents = Storage::get('documents/readme.txt');
// Check if file exists
if (Storage::exists('avatars/user-123.jpg')) {
// File exists
}
// Get the file URL (only reachable if the bucket allows public reads)
$url = Storage::url('avatars/user-123.jpg');
Temporary URLs
Generate pre-signed URLs for temporary access to private files:
// URL valid for 30 minutes
$url = Storage::temporaryUrl(
'documents/private-file.pdf',
now()->addMinutes(30)
);
Deleting Files
// Delete a single file
Storage::delete('documents/old-file.txt');
// Delete multiple files
Storage::delete(['file1.txt', 'file2.txt']);
Listing Files
// List all files in a directory
$files = Storage::files('documents');
// List all files recursively
$files = Storage::allFiles('documents');
// List directories
$directories = Storage::directories('uploads');
File Uploads in Controllers
Here's a complete example of handling file uploads:
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
class AvatarController extends Controller
{
public function store(Request $request)
{
$request->validate([
'avatar' => 'required|image|max:2048',
]);
$path = Storage::put('avatars', $request->file('avatar'));
auth()->user()->update(['avatar_path' => $path]);
return back()->with('success', 'Avatar uploaded successfully');
}
public function show()
{
$url = Storage::temporaryUrl(
auth()->user()->avatar_path,
now()->addHour()
);
return redirect($url);
}
}
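The controller above still needs routes. A minimal sketch, assuming the default routes/web.php and the built-in auth middleware (the /avatar paths are hypothetical):

```php
// routes/web.php — hypothetical routes for the AvatarController above
use App\Http\Controllers\AvatarController;
use Illuminate\Support\Facades\Route;

Route::middleware('auth')->group(function () {
    Route::post('/avatar', [AvatarController::class, 'store']);
    Route::get('/avatar', [AvatarController::class, 'show']);
});
```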
Multiple Disks
If you need to use multiple buckets or storage instances, define additional disks in config/filesystems.php:
'disks' => [
// ... existing disks
'backups' => [
'driver' => 's3',
'key' => env('AWS_ACCESS_KEY_ID'),
'secret' => env('AWS_SECRET_ACCESS_KEY'),
'region' => env('AWS_DEFAULT_REGION'),
'bucket' => 'backups', // Different bucket
'endpoint' => env('AWS_ENDPOINT'),
'use_path_style_endpoint' => true,
'throw' => false,
],
],
Then use the disk explicitly:
Storage::disk('backups')->put('database.sql', $dump);
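For large backups, writeStream avoids loading the entire dump into memory. A sketch, assuming a dump file already exists at storage_path('app/database.sql'):

```php
use Illuminate\Support\Facades\Storage;

// Stream the dump to the backups disk instead of reading it into a string
$stream = fopen(storage_path('app/database.sql'), 'r');

Storage::disk('backups')->writeStream('database.sql', $stream);

// writeStream may leave the handle open depending on the adapter
if (is_resource($stream)) {
    fclose($stream);
}
```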
Troubleshooting
"Access Denied" errors:
- Verify your access key ID and secret are correct
- Check that the user has access to the bucket
- Ensure the bucket name matches exactly
"Bucket not found" errors:
- Double-check the bucket name in your .env file
- Ensure the bucket exists in your object storage instance
Connection errors:
- Verify the endpoint URL is correct
- Check that AWS_USE_PATH_STYLE_ENDPOINT is set to true
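A quick way to isolate configuration problems from application code is a round trip from Tinker. This sketch assumes the s3 disk configured above:

```shell
php artisan tinker
>>> Storage::disk('s3')->put('connectivity-test.txt', 'ok');
>>> Storage::disk('s3')->get('connectivity-test.txt');  // should return "ok"
>>> Storage::disk('s3')->delete('connectivity-test.txt');
```

If put fails here, the issue is in your credentials, endpoint, or bucket configuration rather than in your application code.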