How I went head to head with GitHub Copilot

Janez Cergolj

Who triumphed in my face-off with GitHub Copilot?

In a recent coding adventure, I delved into Laravel macros and decided to put GitHub Copilot through its paces. I had written my own unit test class, but then remembered that Copilot can generate tests too, so I let it have a go.

Let's first look at the HTTP macro:

<?php

namespace App;

use Illuminate\Support\Facades\Http;

class BrevoMacro
{
    public function presetting(): callable
    {
        return function () {
            return Http::withHeaders(
                [
                    'api-key' => config('brevo-webhook-manager.brevo.api_key'),
                    'accept' => 'application/json',
                    'content-type' => 'application/json',
                ]
            )->withUserAgent(config('brevo-webhook-manager.api_user_agent'))
                ->baseUrl(config('brevo-webhook-manager.brevo.base_url'));
        };
    }
}
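
The post doesn't show where the macro gets registered. As a rough sketch (assuming a standard `AppServiceProvider`; the actual provider in the package may differ), registration in the `boot` method is what makes `Http::presetting()` available to the tests below:

```php
<?php

namespace App\Providers;

use App\BrevoMacro;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot(): void
    {
        // Register the callable returned by BrevoMacro::presetting()
        // under the name 'presetting' on the Http facade.
        Http::macro('presetting', (new BrevoMacro)->presetting());
    }
}
```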

My version

Here is my version of the test class:

<?php

namespace Tests\Unit;

use Illuminate\Http\Client\PendingRequest;
use Illuminate\Support\Facades\Http;
use ReflectionClass;
use Tests\TestCase;

class BrevoMacroTest extends TestCase
{
    /** @var PendingRequest */
    public $pendingRequest;

    public function setUp(): void
    {
        parent::setUp();

        Http::preventStrayRequests();

        $this->pendingRequest = Http::presetting()->dump();
    }

    /** @test */
    public function assert_http_client_has_presetting_method()
    {
        $this->assertTrue(Http::hasMacro('presetting'));
    }

    /** @test */
    public function assert_api_key_header()
    {
        $this->assertSame(config('brevo-webhook-manager.brevo.api_key'), $this->pendingRequest->getOptions()['headers']['api-key']);
    }

    /** @test */
    public function assert_accept_header_is_set()
    {
        $this->assertSame('application/json', $this->pendingRequest->getOptions()['headers']['accept']);
    }

    /** @test */
    public function assert_content_type_header_is_set()
    {
        $this->assertSame('application/json', $this->pendingRequest->getOptions()['headers']['content-type']);
    }

    /** @test */
    public function assert_user_agent_header_is_set()
    {
        $this->assertSame(config('brevo-webhook-manager.api_user_agent'), $this->pendingRequest->getOptions()['headers']['User-Agent']);
    }

    /** @test */
    public function assert_base_url_is_set()
    {
        $pendingRequest = Http::presetting()->dump();

        $reflectionClass = new ReflectionClass($pendingRequest);
        $baseUrl = $reflectionClass->getProperty('baseUrl');

        $baseUrl->setAccessible(true);

        $this->assertSame(config('brevo-webhook-manager.brevo.base_url'), $baseUrl->getValue($pendingRequest));
    }
}

What I like about this version is that the tests are split into small, manageable chunks that are easy to understand and digest. What I don't like is the last test, where I have to reach for ReflectionClass. It doesn't look pleasant.
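
One way to avoid the reflection could be to fake the client and assert on the full URL of an outgoing request instead of inspecting the private `baseUrl` property. A hypothetical sketch (the `/ping` path and the `Config` override are illustrative, not from the original test):

```php
    /** @test */
    public function assert_base_url_is_set()
    {
        Config::set('brevo-webhook-manager.brevo.base_url', 'https://example.com');

        Http::fake();

        // Any request through the macro should be prefixed with the base URL.
        Http::presetting()->get('/ping');

        Http::assertSent(
            fn ($request) => str_starts_with($request->url(), 'https://example.com')
        );
    }
```

This keeps the test at the level of observable behaviour rather than internal state, at the cost of no longer being a "pure" unit test.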

GitHub Copilot

And here's Copilot's take:

<?php

namespace Tests\Unit;

use Illuminate\Http\Client\Request;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Config;
use Tests\TestCase;
use App\BrevoMacro;

class BrevoMacroTest extends TestCase
{
    public function testPresetting()
    {
        $brevoMacro = new BrevoMacro();
        $presetting = $brevoMacro->presetting();

        $httpMock = Http::fake([
            '*/test' => Http::response(['success' => true], 200)
        ]);

        Config::set('brevo-webhook-manager.brevo.api_key', 'test-api-key');
        Config::set('brevo-webhook-manager.api_user_agent', 'test-user-agent');
        Config::set('brevo-webhook-manager.brevo.base_url', 'https://example.com');

        $response = $presetting()->get('/test');

        $httpMock->assertSent(function ($request) {
            return $request->header('api-key')[0] === 'test-api-key' &&
                $request->header('accept')[0] === 'application/json' &&
                $request->header('content-type')[0] === 'application/json' &&
                $request->header('User-Agent')[0] === 'test-user-agent' &&
                $request->url() === 'https://example.com/test';
        });

        $this->assertEquals(['success' => true], $response->json());
    }
}

At first, Copilot's code didn't work. After some investigation, I discovered the issue: the headers in Copilot's code are returned as arrays, not strings. Once I adjusted for that, everything fell into place and the tests passed.

It is interesting how the AI took a completely different, framework-agnostic approach. Furthermore, it puts everything in one test, which I don't mind. I like that it sets specific config values just for testing. However, I'm on the fence about the Arrange part of the test: it leans on quite a lot of Laravel magic and extra classes for a unit test. On the other hand, the Arrange part is relatively easy to understand, and it makes good use of what Laravel has to offer.

More Laravely way

To round things off, I rewrote the AI's test to be even more in line with Laravel conventions.

<?php

namespace Tests\Unit;

use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\Http;
use Tests\TestCase;

class BrevoMacroTest extends TestCase
{
    public function testPresetting()
    {
        Config::set('brevo-webhook-manager.brevo.api_key', 'test-api-key');
        Config::set('brevo-webhook-manager.api_user_agent', 'test-user-agent');
        Config::set('brevo-webhook-manager.brevo.base_url', 'https://example.com');

        Http::fake([
            '*/test' => Http::response(['success' => true], 200)
        ]);

        $response = Http::presetting()->get('/test');

        Http::assertSent(function ($request) {
            return $request->header('api-key')[0] === 'test-api-key' &&
                $request->header('accept')[0] === 'application/json' &&
                $request->header('content-type')[0] === 'application/json' &&
                $request->header('user-agent')[0] === 'test-user-agent' &&
                $request->url() === 'https://example.com/test';
        });

        $this->assertEquals(['success' => true], $response->json());
    }
}

Final Remarks

As I look back on this experience, it's truly impressive how even these early versions of AI tooling can yield such strong results. Of course, it requires supervision, and not every suggestion is a perfect fit, but it serves as an incredible pair-programming mate, helping to brainstorm and refine ideas. I'm excited to see how it evolves in the coming months and years.

Now, over to you! Which version do you lean towards?