Tag: Best Practices

  • Structuring FastAPI Projects for Maintainability and Scalability

    Structuring FastAPI Projects for Maintainability and Scalability

    Hi, I’m Fast Eddy! As a backend web developer who spends most of my time with FastAPI, I’ve learned that maintainable, scalable project structure is the foundation of any robust API. Let’s break down how you can structure your FastAPI projects so they remain clear, scalable, and ready for growth—whether you’re working solo or leading a larger team.

    Why Project Structure Matters

    FastAPI makes it super easy to spin up endpoints fast. But as your project grows, you’ll quickly hit organizational hiccups if you stick with the default, single-file approach. Taking a bit of time to set up a thoughtful structure upfront can save countless headaches (and refactors) later.

    Common Project Structure

    Let’s look at a directory layout that works well for most mid- to large-sized projects:

    myfastapiapp/
    │
    ├── app/
    │   ├── __init__.py
    │   ├── main.py            # Application entrypoint
    │   ├── api/               # Routers and API logic
    │   │   ├── __init__.py
    │   │   └── v1/
    │   │       ├── __init__.py
    │   │       ├── endpoints/
    │   │       │   ├── __init__.py
    │   │       │   ├── users.py
    │   │       │   └── items.py
    │   │       └── dependencies.py
    │   ├── core/              # Core app configs, settings, security, etc.
    │   │   ├── __init__.py
    │   │   ├── config.py
    │   │   └── security.py
    │   ├── models/            # Pydantic and ORM models
    │   │   ├── __init__.py
    │   │   ├── user.py
    │   │   └── item.py
    │   ├── crud/              # CRUD operations, DB logic
    │   │   ├── __init__.py
    │   │   ├── user.py
    │   │   └── item.py
    │   ├── db/                # Database session, metadata
    │   │   ├── __init__.py
    │   │   └── session.py
    │   └── utils/             # Utility/helper functions
    │       ├── __init__.py
    │       └── email.py
    ├── tests/
    │   ├── __init__.py
    │   ├── test_users.py
    │   └── test_items.py
    ├── alembic/               # Database migrations (if using Alembic)
    ├── .env
    ├── requirements.txt
    ├── Dockerfile
    └── README.md
    

    Key Principles

    1. Separation of Concerns:

    • Keep endpoints, business logic, and data models in their own modules.
• Avoid massive files that lump together unrelated logic.

    2. API Versioning:

    • Nest endpoints under /api/v1/ (and so on) directories from the start. This simplifies future upgrades and backward compatibility.

    3. Dependency Injection:

    • Place shared dependencies (auth, DB sessions, etc.) in dedicated dependencies.py files.

    4. Config Management:

• House settings and startup logic in core/ for better organization and a single source of truth (a minimal config sketch follows this list; read more in my previous article: "Managing Environment Variables in FastAPI Applications").

    5. Utility Layer:

    • Utilities or helpers go in a utils/ folder to keep business logic clean.
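
To make the config-management principle concrete, here is a minimal core/config.py sketch. It assumes the pydantic-settings package and uses hypothetical setting names; adapt it to whatever your .env actually contains.

from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # Hypothetical settings; real values come from the environment or .env
    app_name: str = "myfastapiapp"
    database_url: str = "sqlite:///./app.db"
    secret_key: str = "change-me"

    model_config = SettingsConfigDict(env_file=".env")

settings = Settings()  # Import this single instance wherever configuration is needed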

    Practical Example: Adding a User Endpoint

    Suppose you want to add /users endpoints. Here’s what you’d do:

    • Create a users.py file under both models/ and crud/ for the schemas and CRUD functions.
    • Add endpoint logic to api/v1/endpoints/users.py.
• Register the users router in your main router file (api/v1/__init__.py) so all /users requests reach it; see the wiring sketch below.
    • Wire up dependencies in api/v1/dependencies.py as needed.
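
Here is a minimal sketch of that wiring, assuming the layout above; the function and variable names are illustrative, not prescriptive.

# app/api/v1/endpoints/users.py
from fastapi import APIRouter

router = APIRouter()

@router.get("/")
async def list_users():
    # Delegate to app.crud.user in a real implementation
    return []

# app/api/v1/__init__.py
from fastapi import APIRouter
from app.api.v1.endpoints import users

api_router = APIRouter()
api_router.include_router(users.router, prefix="/users", tags=["users"])

# app/main.py
from fastapi import FastAPI
from app.api.v1 import api_router

app = FastAPI()
app.include_router(api_router, prefix="/api/v1")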

    Scaling Up

    This structure supports the addition of:

    • More teams or developers (clear file boundaries help code reviews and onboarding)
    • New API versions
    • Plugins or third-party integrations
    • Automated testing in the tests/ directory

    Summary

    Properly structuring your FastAPI application sets you up for easier maintenance, painless scaling, and happier developers. A clear directory layout is the unsung hero of a project’s success.

    Have your own best practices or run into structure headaches recently? Share your thoughts below or ping me on Dev Chat!

    — Fast Eddy

  • Squashing Commits in Git: Cleaning Up Your Project History

    Squashing Commits in Git: Cleaning Up Your Project History

    If you’ve ever ended up with a heap of noisy, work-in-progress (WIP) commits after a coding sprint, you know how messy a project’s commit history can get. Maintaining a clean, readable Git log is critical—especially for collaborative work and open source contributions. Today, I want to walk you through the powerful Git technique of "squashing commits," helping you present a tidy project history without losing important changes.

    Why Squash Commits?

    Squashing is the act of combining multiple consecutive commits into one. This is often used during a Git rebase to clean up a feature branch before merging it into main. It’s particularly helpful for:

    • Reducing noise: Fewer, more meaningful commits make it easier to track project history.
    • Improving clarity: Each squashed commit can reflect a well-defined change or feature.
    • Facilitating code reviews: Reviewers appreciate concise, logical changesets.

    How to Squash Commits

    There are a few ways to squash commits in Git, but the most common method is interactive rebase. Here’s a quick guide:

    1. Start Interactive Rebase

    git rebase -i HEAD~N
    

    Replace N with the number of commits you want to squash (e.g., HEAD~3 for the last 3 commits).

    2. The Interactive Screen

    Your default editor will open a list of the last N commits:

    pick 1a2b3c4 Add initial feature blueprint
    pick 2b3c4d5 Implement feature logic
    pick 3c4d5e6 Fix bug in feature
    

    Change all but the first pick to squash or simply s:

    pick 1a2b3c4 Add initial feature blueprint
    squash 2b3c4d5 Implement feature logic
    squash 3c4d5e6 Fix bug in feature
    

    3. Write a Commit Message

    Git will now prompt you to update the commit message for your new, squashed commit. You can combine all messages, summarize, or write a new concise description.

    4. Complete the Rebase

    Save and close the editor. Git will process the rebase and squash your selected commits into one.

    Best Practices for Squashing Commits

    • Communicate with your team before rebasing shared branches—rewriting history can impact collaborators.
    • Squash in feature branches before merging into main/trunk.
    • Use --autosquash with rebase if you’ve used fixup! or squash! commit prefixes.
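
For example, the fixup-plus-autosquash flow looks like this (reusing the example hash from the rebase screen above):

# Mark a follow-up commit as a fixup of an earlier commit
git commit --fixup=1a2b3c4

# Later, fold every fixup! commit into its target automatically
git rebase -i --autosquash HEAD~5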

    Wrapping Up

    Squashing commits is an essential Git technique for any developer seeking a clean and understandable history. It’s easy to adopt into your workflow and will vastly improve your team’s experience during code reviews and when tracing changes.

    Want more advanced tips? Check out my other articles on Git workflows and histories—let’s keep our repos as clean as our code!

    Happy coding!

    — Joe Git

  • Integrating Automation in WordPress: A Guide to Action Scheduler

    Integrating Automation in WordPress: A Guide to Action Scheduler

    As WordPress grows from a simple blogging tool to a robust content management system powering dynamic sites, automation has become essential for developers aiming to optimize workflows and site interactivity. In this article, I’ll explore Action Scheduler—WordPress’s answer to reliable background processing—and show you how to leverage it for common automation tasks.

    What is Action Scheduler?

    Action Scheduler is a scalable job queue for WordPress, originally developed for WooCommerce, and now available for broader plugin and site development through the action-scheduler library. Unlike WP-Cron, which schedules PHP callbacks based on visitor traffic, Action Scheduler uses a database-backed queue, making it more suitable for reliably managing large or recurring background tasks.

    Why Use Action Scheduler?

    • Reliability: Handles thousands of queued actions without overwhelming your server.
    • Scalability: Powers large e-commerce sites and sophisticated plugin logic.
    • Flexibility: Trigger recurring or one-time custom tasks based on your needs.

    Getting Started

    To use Action Scheduler, you can either:

    • Require it as a dependency in your custom plugin (composer require woocommerce/action-scheduler), or
    • Leverage plugins like WooCommerce which bundle it by default.

    Let’s look at a basic example—sending a weekly custom email digest.

    Step 1: Schedule a Recurring Action

    if ( ! as_next_scheduled_action( 'send_weekly_digest' ) ) {
        as_schedule_recurring_action( strtotime('next monday'), WEEK_IN_SECONDS, 'send_weekly_digest' );
    }
    

    Step 2: Hook Your Custom Function

    add_action( 'send_weekly_digest', function () {
        // Retrieve posts, build email content, and send to users
    } );
    

    It’s that simple! You can queue one-off events with as_enqueue_async_action, process data imports in the background, or integrate with third-party APIs—without blocking the WordPress UI or risking timeouts.
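
As a quick illustration, here is what a one-off background job might look like; the hook name and arguments are hypothetical:

// Queue the action to run as soon as possible, passing arguments
as_enqueue_async_action( 'myplugin_import_user', array( 'user_id' => 123 ) );

// Handle it when Action Scheduler processes the queue
add_action( 'myplugin_import_user', function ( $user_id ) {
    // Fetch remote data for this user, store it, and log any failures
}, 10, 1 );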

    Best Practices for Action Scheduler

    • Monitor the Queue: Use the WP Admin interface (Tools > Scheduled Actions) for visibility.
    • Error Handling: Include logging and exception handling to capture failures.
    • Site Performance: Space out heavy tasks and test on staging before deploying.

    When Should You Not Use Action Scheduler?

    Avoid using Action Scheduler for real-time user-facing functionality. It’s designed for background processing and is not immediate.

    Conclusion

    Whether you’re maintaining a bustling WooCommerce store or building custom plugins, Action Scheduler is a modern automation solution every WordPress developer should have in their toolkit. It unlocks a new level of reliability and power for background jobs, paving the way for smarter, more responsive WordPress sites.

    Happy automating!

    —Presley

  • Efficient Log Analysis on Apache Web Servers Using the Command Line

    Efficient Log Analysis on Apache Web Servers Using the Command Line

    As a Linux server administrator, keeping track of your Apache Web Server’s activity and performance is essential. Apache’s robust logging facilities (access and error logs) can hold crucial information about visitor traffic, possible attacks, and performance bottlenecks. But those log files can grow massive — so reading them efficiently from the command line is a must-have skill for every sysadmin. In this article, I’ll run through some of the most effective command-line techniques for analyzing Apache logs.

    Locating Apache Log Files

    By default, Apache keeps log files in /var/log/apache2/ (Debian/Ubuntu) or /var/log/httpd/ (CentOS/RHEL). Typical files are:

    • access.log: Every request to your server.
    • error.log: Errors and diagnostic messages.

    Basic Log Viewing

    To check the most recent log entries:

    tail -n 50 /var/log/apache2/access.log
    

    The above displays the last 50 lines. To watch updates in real time (e.g., as traffic comes in):

    tail -f /var/log/apache2/access.log
    

    Filtering Log Entries

    Let’s say you’re concerned about a particular IP or URL. You can filter log entries like so:

    grep "203.0.113.42" /var/log/apache2/access.log
    

    Or, to find out which URLs were most requested:

    awk '{print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -nr | head -20
    

    This command breaks down as follows:

    • awk '{print $7}' extracts the request path.
    • sort | uniq -c groups and counts each URL.
    • sort -nr sorts them by popularity.
    • head -20 shows the top 20.

    Spotting Errors Quickly

    Error logs are invaluable for debugging. To see the last few error messages:

    tail -n 100 /var/log/apache2/error.log
    

    To find all lines containing “segfault” (a sign of a potentially serious bug):

    grep segfault /var/log/apache2/error.log
    

    Summarizing Traffic by Status Code

    Want a quick traffic health-check? This command shows the most common HTTP responses:

    awk '{print $9}' /var/log/apache2/access.log | sort | uniq -c | sort -nr
    

The $9 field holds the HTTP status code (e.g., 200, 404) in the default common/combined log format.

    Advanced: Combining Tools for Insight

    You can chain commands for deeper insights. For example, to see which IPs are generating the most 404 (Not Found) errors:

    grep ' 404 ' /var/log/apache2/access.log | awk '{print $1}' | sort | uniq -c | sort -nr | head
    

    Tips for Handling Huge Logs

    • Consider using zcat, zgrep, or zless on rotated and compressed logs (ending in .gz).
• Use sed or awk to extract date ranges or fields if your logs get enormous (examples follow this list).
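
For instance (the rotated file name and the date are placeholders for whatever your logrotate setup produces):

# Search a rotated, compressed log without decompressing it first
zgrep "203.0.113.42" /var/log/apache2/access.log.2.gz

# Pull out a single day's traffic, then reuse the earlier pipelines on it
grep "12/Mar/2024" /var/log/apache2/access.log | awk '{print $9}' | sort | uniq -c | sort -nr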

    Mastering these command-line techniques will make you more efficient at troubleshooting, spotting anomalies, and understanding visitor patterns. Apache’s logs are a goldmine — and with the CLI, you’ve got the right pickaxe.

    Happy logging!

    Lenny

  • Demystifying Git Clean: Safely Tidying Up Your Working Directory

    Demystifying Git Clean: Safely Tidying Up Your Working Directory

    When working on complex projects, it’s easy for your Git working directory to accumulate a lot of unnecessary files—build artifacts, temporary logs, and experiment leftovers. If you’ve ever wondered how to quickly clean things up without accidentally losing important work, Git’s git clean command is here to help. In this article, I’ll walk you through how git clean works, how to use it responsibly, and a few pro tips to keep your project environment tidy.

    What Does git clean Do?

Put simply, git clean removes untracked files and directories from your working directory, meaning files Git isn’t tracking at all (nothing in the index). By default it leaves ignored files (those matched by .gitignore) alone unless you add the -x flag, covered below. This can be a lifesaver when you want to get back to a pristine state.

    Basic Usage

    The simplest usage is:

    git clean -n
    

    This does a “dry run”—it lists the files that would be removed, without deleting anything. Always start with this!

    To actually remove untracked files:

    git clean -f
    

    If you want to remove untracked directories as well:

    git clean -fd
    

    Combine with the -x flag if you want to also remove files ignored by .gitignore:

    git clean -fdx
    

    Be very careful with -x. You can lose local config files and other important ignored files.

    Pro Tips for Safe Cleaning

    1. Always Use Dry Run First: Run git clean -n (or --dry-run) to see what will happen before you actually delete anything.
    2. Be Specific: Use git clean -f path/to/file to remove only certain files or folders.
    3. Integrate with Your Workflow: Combine it with git stash or git reset --hard to completely revert your repo to the last committed state.
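
For example, a cautious cleanup might run in this order (the build/ path is just an example):

git clean -n                        # Dry run: list what would be deleted
git clean -f build/                 # Remove untracked files only under build/
git clean -i                        # Or choose files interactively, one prompt at a time
git reset --hard && git clean -fd   # Revert tracked files, then purge untracked files and dirs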

    Common Use Cases

    • Build Artifacts: Get rid of untracked binaries and compiled files before a new build.
    • Experimentation: Clean up temporary files after testing out new ideas.
    • PR Preparation: Tidy your repo before submitting a pull request.

    Conclusion

    git clean is a powerful command to keep your repository organized, but with great power comes great responsibility. Always double-check what you’re deleting and, when in doubt, back up important files. With these tips, you can work more confidently and maintain a clean development environment—one less thing to worry about!

    Happy coding!

    – Joe Git

  • Advanced Angular Routing: Lazy Loading with Route Guards and Resolvers

    Advanced Angular Routing: Lazy Loading with Route Guards and Resolvers

    Angular’s powerful router makes building single page applications seamless, but once your application grows, optimizing routes becomes vital for performance and maintainability. In this article, we’ll delve into intermediate and advanced Angular routing concepts: lazy loading modules, using route guards to protect routes, and leveraging resolvers to fetch data before navigation.

    Why Lazy Loading?

    As Angular applications scale, the bundle size increases, which affects initial load speed. Lazy loading allows us to load feature modules only when needed. This reduces the initial bundle size and speeds up the application startup.

    Setting Up Lazy Loading

    Suppose we have a feature module AdminModule. To lazy load it, our app routing looks like:

    const routes: Routes = [
      { path: 'admin', loadChildren: () => import('./admin/admin.module').then(m => m.AdminModule) }
    ];
    

    When users navigate to /admin, Angular fetches the module on demand.
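
Inside the lazily loaded module, child routes are registered with RouterModule.forChild rather than forRoot. A minimal admin-routing.module.ts might look like this (file and class names follow the example above; the guard from the next section slots into this same route object):

import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { AdminComponent } from './admin.component';

const routes: Routes = [
  { path: '', component: AdminComponent }
];

@NgModule({
  imports: [RouterModule.forChild(routes)], // forChild in feature modules, forRoot only at the app root
  exports: [RouterModule]
})
export class AdminRoutingModule {}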

    Adding Route Guards

    Sensitive routes like /admin may require authentication. We use route guards such as CanActivate to protect them:

    auth.guard.ts

    @Injectable({ providedIn: 'root' })
    export class AuthGuard implements CanActivate {
      constructor(private authService: AuthService, private router: Router) {}
      canActivate(): boolean {
        if (this.authService.isLoggedIn()) {
          return true;
        }
        this.router.navigate(['/login']);
        return false;
      }
    }
    

    Then, in your module’s routing:

    {
      path: '',
      component: AdminComponent,
      canActivate: [AuthGuard]
    }
    

    Data Pre-Fetching with Resolvers

    Sometimes you want to ensure data is available before route activation. This is where resolvers shine.

    admin.resolver.ts

    @Injectable({ providedIn: 'root' })
    export class AdminResolver implements Resolve<AdminData> {
      constructor(private adminService: AdminService) {}
      resolve(route: ActivatedRouteSnapshot): Observable<AdminData> {
        return this.adminService.getAdminData();
      }
    }
    

    Apply it to your routes:

    {
      path: '',
      component: AdminComponent,
      resolve: { adminData: AdminResolver },
      canActivate: [AuthGuard]
    }
    

    Now, AdminComponent receives the resolved data:

    constructor(private route: ActivatedRoute) {
      this.route.data.subscribe(data => {
        this.adminData = data['adminData'];
      });
    }
    

    Key Takeaways

    • Lazy loading optimizes performance by loading modules on demand.
    • Route guards enhance security by controlling access to routes.
    • Resolvers fetch and supply route data before rendering, ensuring a smoother user experience.

    Mastering these Angular routing features leads to more efficient, secure, and user-friendly applications.

  • Beginner’s Guide to Angular Routing

    Beginner’s Guide to Angular Routing

    Routing is a fundamental part of building single-page applications (SPAs) with Angular. It lets you navigate between different views or components, enabling a smooth and dynamic user experience. This guide will walk you through the basics of Angular routing so you can get started adding navigation to your Angular apps!

    What is Routing in Angular?

    Angular routing allows you to display different components or views based on the URL in the browser, without reloading the entire page. Each route maps a URL path to a component.

    Setting Up Routing

    1. Create a New Angular App (if needed):

      ng new my-routing-app
      cd my-routing-app
      
    2. Generate Components:

      ng generate component home
      ng generate component about
      ng generate component contact
      
    3. Configure the Router:
      Open app-routing.module.ts (or create it via ng generate module app-routing --flat --module=app if it doesn’t exist) and define your routes:

      import { NgModule } from '@angular/core';
      import { RouterModule, Routes } from '@angular/router';
      import { HomeComponent } from './home/home.component';
      import { AboutComponent } from './about/about.component';
      import { ContactComponent } from './contact/contact.component';
      
      const routes: Routes = [
        { path: '', component: HomeComponent },
        { path: 'about', component: AboutComponent },
        { path: 'contact', component: ContactComponent },
      ];
      
      @NgModule({
        imports: [RouterModule.forRoot(routes)],
        exports: [RouterModule]
      })
      export class AppRoutingModule {}
      
    4. Enable Router in Your App:
      In app.module.ts, import the AppRoutingModule.

      import { AppRoutingModule } from './app-routing.module';
      // Add AppRoutingModule to the imports array
      
    5. Add Router Outlet:
      In app.component.html (or your root component), add:

      <nav>
        <a routerLink="">Home</a> |
        <a routerLink="/about">About</a> |
        <a routerLink="/contact">Contact</a>
      </nav>
      <router-outlet></router-outlet>
      

      The <router-outlet> directive is where Angular displays the routed component.

    Try It Out!

    Run your app with ng serve, and click the navigation links to see different components render without a full page reload.

    More Routing Features

• Route Parameters: For dynamic routes (e.g., user profiles), use :id in paths (see the example after this list).
    • Wildcard Routes: { path: '**', component: NotFoundComponent } for 404 pages.
    • Route Guards: Control access to certain routes.
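
As a quick illustration of route parameters, assuming a hypothetical UserComponent (generated and declared like the components in step 2):

// In your routes array
{ path: 'user/:id', component: UserComponent },

// In user.component.ts, read the parameter from the active route
import { Component } from '@angular/core';
import { ActivatedRoute } from '@angular/router';

@Component({ selector: 'app-user', template: '<p>User {{ userId }}</p>' })
export class UserComponent {
  userId: string | null;
  constructor(private route: ActivatedRoute) {
    this.userId = this.route.snapshot.paramMap.get('id');
  }
}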

    Conclusion

    Angular routing is powerful but easy to get started with. Defining routes, linking to them, and displaying components based on the URL are at the core of building any Angular SPA. Experiment with different features as you get more comfortable!

    Happy coding!

  • Rate Limiting Strategies in FastAPI: Protecting Your API from Abuse

    Rate Limiting Strategies in FastAPI: Protecting Your API from Abuse

    Hi everyone! Fast Eddy here. Today, I’m tackling an important topic that every API developer needs to consider: how to implement rate limiting in FastAPI applications. Without proper rate limiting, your API could be susceptible to abuse, accidental overload, or even denial-of-service attacks. In this article, I’ll explore some effective strategies and walk through a practical implementation so your FastAPI APIs stay performant and healthy.

    Why Rate Limit?

    Rate limiting restricts how often a client can call your API endpoints during a given timeframe. Reasons to implement rate limits include:

    • Protection from abuse: Prevent malicious users from flooding your API.
    • Fairness: Allocate resources equitably among all users.
    • Capacity management: Keep backend services stable and responsive.

    Common Rate Limiting Strategies

    1. Fixed Window: Allows a certain number of requests per fixed period (e.g., 100 requests per minute).
    2. Sliding Window: Offers a smoother rate limit calculation by sliding the window over smaller intervals.
    3. Token Bucket/Leaky Bucket: Provides more flexibility, letting traffic burst up to a point while maintaining an average rate.

    For most use-cases, the fixed window is simple and effective, so let’s see how to implement it in FastAPI.

    Implementing Fixed Window Rate Limiting with FastAPI

    While FastAPI doesn’t have built-in rate limiting, you can easily add it with middleware or dependencies. For demonstration, I’ll use the popular slowapi library, which integrates seamlessly with FastAPI.

    Install Required Packages

    pip install slowapi
    

    Basic Setup

    from fastapi import FastAPI, Request
    from slowapi import Limiter, _rate_limit_exceeded_handler
    from slowapi.util import get_remote_address
    from slowapi.errors import RateLimitExceeded
    
    app = FastAPI()
    limiter = Limiter(key_func=get_remote_address)
    app.state.limiter = limiter
    app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
    
    @app.get("/resource")
    @limiter.limit("5/minute")  # Limit to 5 requests per minute per IP
    async def resource(request: Request):
        return {"message": "You have access!"}
    

    Key Points:

    • The Limiter uses the client’s remote address by default.
    • The @limiter.limit decorator specifies the allowed rate (e.g., 5 requests per minute).
    • If a client exceeds the limit, a 429 Too Many Requests response is returned.

    Customizing Limits

    You can also set different limits for different endpoints, or even impose user-specific limits if you have authentication and unique identifiers.

    @limiter.limit("10/minute;100/day", key_func=lambda request: request.headers['X-API-KEY'])
    def user_specific(request: Request):
        # Get user from API key, for example
        ...
    

    Best Practices

    • Communicate limits: Include proper headers (like Retry-After) to inform clients of rate limits.
• Use external storage: For multiple server instances, persist rate limit counters in Redis or a similar shared store (see the sketch after this list).
    • Monitoring: Log and monitor rate limit events for insights and security.
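
Here is a minimal sketch of the external-storage idea, assuming a local Redis instance and that the Redis backend for the underlying limits library is installed; slowapi exposes a storage_uri option for this, but double-check it against the version you are running.

from slowapi import Limiter
from slowapi.util import get_remote_address

# Counters live in Redis, so every app instance enforces the same limits
limiter = Limiter(
    key_func=get_remote_address,
    storage_uri="redis://localhost:6379",
)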

    Conclusion

    Rate limiting is a must-have for any production API. With FastAPI and helpful tools like slowapi, implementation is straightforward. Protect your service, ensure fairness, and keep things running smoothly!

    Let me know your favorite approach to rate limiting, or if you have other FastAPI tips you want to see next!

    Happy coding,

    Fast Eddy

  • Mastering the ‘top’ Command: Tips for Efficient Linux Server Monitoring

    Mastering the ‘top’ Command: Tips for Efficient Linux Server Monitoring

    When it comes to monitoring the health and performance of your Linux servers, the "top" command is often one of the first tools in an administrator’s arsenal. It provides a real-time, dynamic view of what’s happening on your system, including which processes are consuming the most resources and overall system load. Yet, many users only scratch the surface of what "top" can do. This article explores some practical tips and advanced usage that can help you get the most out of the "top" command.

    Basic Usage

    Simply typing top in your terminal brings up a continually updating table of processes. Here you’ll see columns for PID, user, CPU and memory usage, and more. The header section shows system uptime, load averages, and summary information about memory and processes.

    Navigating and Customizing the Display

• Sorting by column: You can change how processes are sorted. Press P to sort by CPU usage, M to sort by memory usage, N to sort by PID, or T to sort by cumulative time; the < and > keys move the sort field left or right.
    • Changing update interval: Press d and enter a new number of seconds to set the screen refresh rate. A longer interval can lessen system load on heavily used servers.
    • Filtering processes: Hit o (lowercase letter o), then type a filter (e.g., USER=apache to see only apache processes).
    • Killing a process: Press k, type the PID of the process, and then the signal (usually 15 for gracefully terminating, or 9 for forcefully ending).

    Useful Command-Line Options

    • Display specific user’s processes: top -u someuser
    • Show only processes with high resource use: Combine with grep or use interactive filters in "top" to focus on processes hogging resources.

    Saving Custom Options

    You can customize the top interface (like adjusting columns and sorting), then press W (capital w) to save your preferred configuration for future sessions.

    Advanced Tips

• Batch mode (for logs and scripting): top -b -n 1 > top-output.txt runs top in batch mode, which is useful for logging system state or integrating into other scripts (see the snippet after this list).
• Color and highlighting: Press z to toggle color mode; x highlights the current sort column and y highlights running tasks.
    • Tree view: Press V to view the processes in a hierarchical tree mode, showing parent/child relationships.
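
For example, to snapshot the system every five seconds for one minute and keep the output for later review (the file name is just a suggestion):

top -b -d 5 -n 12 > ~/top-snapshot-$(date +%F).log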

    Conclusion

    The "top" command is a foundational monitoring tool for Linux server administrators. By mastering its interactive features, command-line options, and customizations, you can gain critical insights into your server’s health and performance—ensuring your hosted web sites and services run smoothly.

    Whether you’re a beginner or a seasoned sysadmin, spending some time with "top" can make all the difference in proactive server management.

  • Securing Apache Web Server: Essential Command-Line Techniques

    Securing Apache Web Server: Essential Command-Line Techniques

    When it comes to hosting web sites on Linux servers, security is always a top priority. While Apache is a robust and reliable web server, its security out-of-the-box typically needs enhancement to withstand modern threats. In this article, I’ll walk you through essential command-line techniques to secure your Apache installation and reduce potential attack surfaces, drawing on my experience managing Linux-based web hosting environments.

    1. Keep Apache and Dependencies Updated

    Running outdated software is a common vulnerability. Update your Apache installation and its dependencies with:

    sudo apt update && sudo apt upgrade apache2   # Debian/Ubuntu
    sudo yum update httpd                        # CentOS/RedHat
    

    Automate this with unattended-upgrades or a systemd timer (see my article on systemd timers for more details).

2. Disable Unused Apache Modules

    Apache has a modular architecture. Only load what you need:

    sudo apache2ctl -M                      # List enabled modules
    sudo a2dismod autoindex                 # Example for Debian/Ubuntu
    

    After disabling, reload:

    sudo systemctl reload apache2
    

On RHEL/CentOS, you may need to comment out the corresponding LoadModule lines in httpd.conf or the files under conf.modules.d/.

3. Restrict Directory Permissions

    Use minimal permissions and ownership for web directories. For example:

    sudo chown -R www-data:www-data /var/www/html
    sudo find /var/www/html -type d -exec chmod 750 {} \;
    sudo find /var/www/html -type f -exec chmod 640 {} \;
    
4. Configure Apache Security Settings

    Edit your main config (often /etc/apache2/apache2.conf or /etc/httpd/conf/httpd.conf) and consider:

    # Hide server version details
    ServerSignature Off
    ServerTokens Prod
    
    # Limit request size to mitigate some DoS attacks
    LimitRequestBody 1048576
    
    # Disable directory listing
    <Directory /var/www/html>
        Options -Indexes
    </Directory>
    
5. Enable TLS/SSL

    Secure traffic with HTTPS using Let’s Encrypt:

    sudo apt install certbot python3-certbot-apache
    sudo certbot --apache
    

    Certbot configures SSL automatically, but be sure to set strong ciphers and protocols. Example in ssl.conf:

    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
    SSLCipherSuite HIGH:!aNULL:!MD5
    SSLHonorCipherOrder on
    
6. Monitor Logs Regularly

    Automate log checks with tools like fail2ban, and inspect logs on the command line:

    tail -f /var/log/apache2/access.log /var/log/apache2/error.log
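
As one possible setup, a jail.local excerpt enabling the stock apache-auth filter might look like this (the ban time and retry count are arbitrary; confirm the filter names shipped with your fail2ban version):

# /etc/fail2ban/jail.local
[apache-auth]
enabled  = true
port     = http,https
logpath  = /var/log/apache2/error.log
maxretry = 5
bantime  = 3600

After restarting fail2ban, sudo fail2ban-client status apache-auth shows how many IPs are currently banned.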
    

    Conclusion

    By applying these straightforward command-line techniques, you can lock down your Apache web server and help protect your web sites against common vulnerabilities. Stay proactive—monitor updates, prune what’s unnecessary, and automate where possible for a safer, more resilient hosting environment.