Revolutionizing Internship Evaluation: How We Transformed Bilkent University’s Approach (AWS & GitHub Actions & Spring)

Berke Gökmen
10 min read · Mar 6, 2024


Img. 1, Login Page of the System

A grateful note before we start:

We are thankful to the AWS Türkiye community and Levent Kanpolat, who supported the development and deployment of our project with generous cloud research credits.

Background Story

It all started as a normal course project. Like all Bilkent CS students, we were taking CS319, taught by Professor Eray Tüzün.

During the first weeks of the course, groups are formed and then required to come up with a plan for how the implementation will proceed and what the application needs. The group we formed was strong: it initially consisted of me and my two friends, Mert Gençtürk and Erdem Eren Çağlar. Later, we were joined by another friend, Ahmet Alperen Yılmazyıldız, and we were all set.

The topic is determined by the course coordinators, and in our case it was the internship evaluation procedure, which was previously handled manually through Google Drive, Moodle, and email. Groups are required to talk with the stakeholders in order to understand their needs and implement them thoroughly.

Throughout the semester, we constantly strived to make the project better with the latest technologies instead of just using Spring Boot and calling it a day. On that endeavor, we tried to integrate cloud technologies into our project, which turned out to be AWS products such as S3 buckets, Elastic Beanstalk, etc., since they seemed easy to set up. This also served my personal interest in learning cloud technologies at the time; what better time to learn something entirely new than in the middle of the semester, am I right?

However, there was a problem: since I was the one handling the AWS side, I did not want to spend my own money on cloud computing. I was aware that AWS already offers a very generous free tier, but I still did not want to end up paying because of something I misconfigured. Once we had a somewhat ready project that we could show to our professor, we asked him if he could help us get some cloud research credits so that we could work without worrying about what happens if something goes sideways. This turned out to be easier than anticipated, and we received cloud research credits after a few email exchanges with Levent Kanpolat from AWS Türkiye. Now that we had the credits, there was no going back. It was time to get to real work.

Initial Architecture

Img. 2, Initial Architecture

Since I was also working at Getir as a software engineer at the time and trying my best in my other courses, I did not have much time to do everything I wanted during the semester. I was able to implement the remaining pieces the following summer with the continued support of our professor and my teammates, but more on that later.

Now, going into the technical details, the first step was to get our Spring Boot backend running on the cloud, more specifically on Elastic Beanstalk, since it was extremely easy to set up aside from a few bumps in the road.

AWS Elastic Beanstalk

The first step was to create an environment on the EB console. After that, I had a fully configured EC2 machine running in the cloud with sample code. Now it was time to automatically test and deploy our own code whenever a commit lands on the main branch of the project. For that purpose, I decided to go with GitHub Actions, since it was easy to set up and only required an access key and a secret key for the EB environment. Once that was configured, the Actions YAML file looked as follows:

name: Deployment
on:
  push:
    branches:
      - main
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Set up JDK
        uses: actions/setup-java@v1
        with:
          java-version: '20'
      - name: Build with Maven
        run: mvn --batch-mode --update-snapshots verify
      - name: Upload JAR
        uses: actions/upload-artifact@v3
        with:
          name: artifact
          path: target/internship-0.0.1.jar
  deploy:
    needs: build
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Download JAR
        uses: actions/download-artifact@v3
        with:
          name: artifact
      - name: Deploy to EB
        uses: einaregilsson/beanstalk-deploy@v13
        with:
          aws_access_key: ${{ secrets.AWS_ACCESS_KEY }}
          aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          use_existing_version_if_available: false
          application_name: app-name
          environment_name: Env-name
          version_label: ${{ github.SHA }}
          region: eu-central-1
          deployment_package: jar-name.jar

The pipeline was working flawlessly after a few trial-and-error attempts. On any commit to the main branch, the pipeline is automatically triggered, runs the unit tests (more on that later), and then builds the JAR package. If everything goes well up to that point, it moves on to the deployment stage, where it pushes the new version to the EB environment.

Amazon S3

With the actual backend deployment out of the way, we needed a way to store all the files (more specifically, the internship reports). For that, the easiest choice was S3 buckets. Setting up an S3 client in Spring Boot is as easy as follows:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class S3Config {

    @Value("${spring.data.s3.accesskey}")
    private String accessKeyId;

    @Value("${spring.data.s3.secretkey}")
    private String accessKeySecret;

    @Value("${spring.data.s3.region}")
    private String s3RegionName;

    @Bean
    public AmazonS3 getAmazonS3Client() {
        final BasicAWSCredentials basicAWSCredentials = new BasicAWSCredentials(accessKeyId, accessKeySecret);
        return AmazonS3ClientBuilder
                .standard()
                .withCredentials(new AWSStaticCredentialsProvider(basicAWSCredentials))
                .withRegion(s3RegionName)
                .build();
    }
}

After this is configured, we can use the file API provided by Amazon's S3 SDK to upload and download files from the S3 bucket.
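
For illustration, a file service built on top of this client could look roughly like the sketch below. The ReportStorageService name and the spring.data.s3.bucket property are hypothetical, not our exact implementation:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3Object;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

import java.io.InputStream;

@Service
public class ReportStorageService { // hypothetical name, for illustration only

    private final AmazonS3 amazonS3;

    @Value("${spring.data.s3.bucket}") // hypothetical property key
    private String bucketName;

    public ReportStorageService(AmazonS3 amazonS3) {
        this.amazonS3 = amazonS3;
    }

    // Uploads an internship report under the given key (e.g. "reports/<studentId>.pdf").
    public void uploadReport(String key, InputStream content, long contentLength) {
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(contentLength);
        amazonS3.putObject(bucketName, key, content, metadata);
    }

    // Streams a previously uploaded report back to the caller.
    public InputStream downloadReport(String key) {
        S3Object object = amazonS3.getObject(bucketName, key);
        return object.getObjectContent();
    }
}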

Amazon S3 for Static Website Hosting (Frontend)

The S3 service is versatile: besides object storage, it can also be used for static website hosting, which is exactly what we did.

Our frontend app is developed with React, which means that when we build the production version, we simply get static website assets that we can push to S3, again through GitHub Actions. S3 allows you to host a bucket as a static website and generates a link that directly serves the React app we have built. This is extremely convenient; if only I had known this sooner… Here is an example GitHub Actions configuration for deploying a React build to an S3 bucket.

# The code for building & deploying frontend to S3 bucket (Github Actions)

name: s3-depl

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1
      - name: Build React App
        run: npm install && npm run build
      - name: Deploy app build to S3 bucket bilkent domain
        run: aws s3 sync ./build/ s3://bucket-name --delete

Now comes the question: why not use AWS Amplify, since it is literally designed for this purpose? The reason is simple. While developing the project, we did not have any domains, neither from Bilkent University nor our own. Thus, we could not obtain any SSL certificates, and this is where the problem lies. AWS Amplify serves the app over https://, and a page served over HTTPS cannot freely send requests to a backend that only speaks http://, since browsers block this as mixed content. Later, we obtained SSL certificates and domain names, but our frontend kept living in its little S3 bucket.

Redis (Amazon ElastiCache)

Even though the usage of an ElastiCache instance is not vital in our “simple” application, it’s still good to try out relatively novel technologies.

I did not use ElastiCache for caching in the usual sense. Rather, it stores password-reset tokens and a JWT (JSON Web Token) blacklist, both of which are short-lived by definition.
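
The calls you will see below (cacheService.addString, cacheService.containsKey) go through a small wrapper around Redis. A minimal sketch of what such a wrapper could look like, built on Spring's StringRedisTemplate, is shown here; our actual implementation differs in details, so treat this as illustrative:

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

import java.time.Duration;

@Service
public class CacheService { // simplified sketch of our Redis wrapper

    private static final Duration DEFAULT_TTL = Duration.ofMinutes(10);

    private final StringRedisTemplate redisTemplate;

    public CacheService(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Stores a value with the default 10-minute TTL (used for password-reset tokens).
    public void addString(String key, String value) {
        redisTemplate.opsForValue().set(key, value, DEFAULT_TTL);
    }

    // Stores a value with an explicit TTL (used for the JWT blacklist).
    public void addString(String key, String value, Duration ttl) {
        redisTemplate.opsForValue().set(key, value, ttl);
    }

    public boolean containsKey(String key) {
        return Boolean.TRUE.equals(redisTemplate.hasKey(key));
    }

    // Reads and removes a key, e.g. when a reset code is consumed.
    public String getAndDelete(String key) {
        String value = redisTemplate.opsForValue().get(key);
        if (value != null) {
            redisTemplate.delete(key);
        }
        return value;
    }
}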

For the password reset procedure, the system needs to generate a token that expires after some point and cannot be reused. Using JWTs for this purpose is straightforward, but there is no way to invalidate them without explicitly blacklisting them in some kind of storage system. Thus, the system generates a UUID (Universally Unique Identifier) and stores it in Redis with a TTL (time to live) of 10 minutes, with the email it was requested for as the value. If the reset code is not used within 10 minutes, it automatically vanishes from Redis; otherwise, it gets deleted when it is used to reset a password. This gives the system a secure password reset flow.

String code = UUID.randomUUID().toString();
String key = "forgot:" + code;

try {
    cacheService.addString(key, email);
    log.info("Added forgot key -> " + key);
    mailService.sendForgotPasswordEmail(email, code); // Async
} catch (Exception exception) {
    log.error("Could not add forgot key -> " + key);
}

For JWTs, the cache is used as a blacklist. As I explained above, there is no way to invalidate a JWT other than through its own expiration property. So when someone logs out of their account(s), even though the JWT is removed from the browser's local storage, it is still valid and could be used if it had been maliciously stolen earlier. Thus, whenever a logout request hits the system, the token(s) are put into the blacklist cache with a TTL of 1 hour, the same value as the lifespan of the JWT itself. This ensures that JWTs cannot be used after logout.

if (cacheService.containsKey("blacklist:" + accessToken))
    throw new TokenRuntimeException("Unauthorized", ErrorCodes.JWT_BLACKLIST, HttpStatus.UNAUTHORIZED);
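
And the logout side that fills the blacklist could look roughly like this (again a sketch; the one-hour TTL matches the token lifespan mentioned above, and Duration is java.time.Duration):

public void logout(String accessToken) {
    // Keep the token blacklisted for as long as it could still be valid (1 hour).
    cacheService.addString("blacklist:" + accessToken, "revoked", Duration.ofHours(1));
}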

MongoDB

MongoDB is the only service that is not from AWS. To use MongoDB's free tier (500 MB) for development purposes, I created a database through MongoDB Cloud. Our MongoDB Cloud database actually runs on AWS underneath and could theoretically be integrated into our VPC (Virtual Private Cloud) easily, though that would require a more premium tier at MongoDB Cloud.
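
On the Spring side, connecting to the cluster is just a matter of pointing a MongoTemplate at the connection string MongoDB Cloud gives you. A minimal sketch, mirroring the test configuration shown later; the spring.data.mongodb.uri property key is the standard Spring Boot one, and our actual configuration may differ slightly:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;

@Configuration
public class MongoConfig { // sketch, for illustration

    // e.g. mongodb+srv://user:pass@cluster.mongodb.net/internship
    // (the URI must include the database name)
    @Value("${spring.data.mongodb.uri}")
    private String connectionString;

    @Bean
    public MongoTemplate mongoTemplate() {
        return new MongoTemplate(new SimpleMongoClientDatabaseFactory(connectionString));
    }
}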

Story, continued…

Now comes the last part of the semester: the presentations. Every group goes on stage one by one (at least in our semester) to present their project and answer questions from both the other students and the professor himself. After we had presented our work and left the room once all the presentations were finished, we had a great talk with our professor, but we did not yet know that we had been chosen as the best project. We only found out when we received a message from him saying, "you can write that your project was chosen as the best project in your CVs." We were then asked if we wanted to continue developing the project for actual use at Bilkent University, and so another part of the adventure began during the summer.

Final Architecture

Not much had to change from the initial architecture, but I had to integrate some sort of auto-scaling for the EB instances, as well as SSL certificates for both the frontend and the backend.

Img. 3, Final Architecture

As you can see from the picture, I added an Amazon Elastic Load Balancer in front of the EB instances. Moreover, a CloudFront distribution is used to serve our static frontend better. Let's go through them one by one.

CloudFront

Since our website was hosted with S3 static website hosting, to attach an SSL certificate I had to place the S3 bucket behind a CloudFront distribution. This turned out to be a great choice, since we could now see where the frontend traffic was coming from, and I could easily attach an Amazon-issued SSL certificate (more on that later).

Img. 4, A peek into the CloudFront dashboard

Elastic Load Balancer

There is not much to explain here, since this is fully managed: all I had to do was tick the "auto-scaling" box and I was all set. However, auto-scaling was not the only reason I integrated a load balancer; the other was being able to use SSL certificates on our custom domains.

AWS Certificate Manager

By far one of the easiest services on the AWS platform I had to deal with. After we obtained the URLs for our frontend and backend applications from the Computer Department of Bilkent University, the rest was extremely easy: we just had to configure some CNAME records and that was it. With a few clicks, I had SSL certificates ready for both the frontend and the backend and could easily apply them to my existing services, CloudFront and Elastic Beanstalk.

Additional Steps to Get Ready for Production (Test Containers)

One more thing we did as a team was to make sure the system stayed sound even after we pushed fixes. For that, we wrote unit tests, many unit tests. I also integrated them into our backend pipeline, and they run automatically before the application gets built.

Since our tests are really comprehensive and not just limited to basic logic, we also had to test against our database, the cache, etc. This can be done easily on a local machine by running a few Docker containers; doing it in an automated pipeline, however, requires a little more effort. For that, I configured Testcontainers. A simple configuration for a MongoDB container is shown below.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;
import org.testcontainers.containers.MongoDBContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Configuration
@Testcontainers
public class MongoTestConfiguration {

    private static final int MONGO_PORT = 27017;

    @Container
    private static final MongoDBContainer mongoDBContainer = new MongoDBContainer("mongo:5.0.21")
            .withExposedPorts(MONGO_PORT)
            .withReuse(true); // Without this, each test would spin up its own container.

    static {
        mongoDBContainer.start();
    }

    @Bean
    public MongoTemplate mongoTemplate() {
        String connectionString = mongoDBContainer.getReplicaSetUrl();
        return new MongoTemplate(new SimpleMongoClientDatabaseFactory(connectionString));
    }
}
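
A test class can then load this configuration and talk to a throwaway MongoDB instead of the real cluster. For example, a simplified sketch (the test class and collection names are made up for illustration):

import com.mongodb.client.MongoCollection;
import org.bson.Document;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.mongodb.core.MongoTemplate;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Loads only the container-backed configuration above, so the test never
// touches the production cluster.
@SpringBootTest(classes = MongoTestConfiguration.class)
class MongoSmokeTest { // hypothetical test, for illustration

    @Autowired
    private MongoTemplate mongoTemplate;

    @Test
    void storesAndReadsBackADocument() {
        MongoCollection<Document> reports = mongoTemplate.getCollection("reports");
        reports.insertOne(new Document("course", "CS299"));

        Document saved = reports.find().first();
        assertEquals("CS299", saved.getString("course"));
    }
}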

Conclusion

This was overall a great experience. Going from a simple course project to an actual system used by more than 500 people at our own university is just incredible. I truly feel that my teammates and I made a contribution that will live on at our university.

Lastly, I want to thank Prof. Eray Tüzün for believing in us, and Levent Kanpolat for supporting us with cloud research credits.

For any questions, you can reach out to me through LinkedIn, or leave them in the comments. Moreover, please let me know if there is any part you’d want me to write a detailed article about. Thanks for reading!

Link: https://fe-ims.bilkent.edu.tr/. You can see the whole team by clicking "About Us".
