[{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/3sem/","section":"Tags","summary":"","title":"3Sem","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/devlog/","section":"Tags","summary":"","title":"Devlog","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/devlog/","section":"Devlogs","summary":"","title":"Devlogs","type":"devlog"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/frontend/","section":"Tags","summary":"","title":"Frontend","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/","section":"Jesper Andersen - Blog \u0026 DevLog","summary":"","title":"Jesper Andersen - Blog \u0026 DevLog","type":"page"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/series/maintenance-log/","section":"Series","summary":"","title":"Maintenance Log","type":"series"},{"content":" Devlog Week 9: React Frontend Kickoff (Vite, Routing, Layout) # This week starts the frontend phase of the Maintenance Log project. The focus was not new domain functionality yet, but establishing a clean React baseline: routing, layout composition, reusable UI building blocks, and a consistent styling foundation.\nWhat Changed This Week # Set up a React app using Vite and removed starter template code Added declarative routing with React Router and a nested layout structure Established a domain-based folder structure (components/ + pages/) Defined a small design system via CSS custom properties (tokens) Implemented shared UI components (InputField, Button) Added a layout shell (NavBar + drawer menu) using state + conditional rendering Added initial pages and placeholders for upcoming features Project Setup # Created the project with Vite Installed React Router Wrapped the app in BrowserRouter in main.jsx Folder Structure # The goal was to keep features grouped by domain and co-locate styles.\nsrc/pages/ for route-level components src/components/ grouped by domain: auth/, layout/, assets/, shared/ Component-specific CSS files live next to the component that uses them Routing (App.jsx) # Routing is centralized in App.jsx.\nPublic route: /login Authenticated area as a nested layout route: / renders Layout Child routes render inside \u0026lt;Outlet /\u0026gt; Dynamic segments: :id for asset/user details The routing structure is intentionally minimal at this stage, but it establishes the pattern for adding the real pages later.\nLayout Shell (Layout + NavBar + DrawerMenu) # The app shell is implemented as a reusable layout route.\nLayout owns drawer open/close state toggleMenu and closeMenu are passed down as props NavBar triggers onToggle from the burger menu DrawerMenu renders conditionally (isMenuOpen \u0026amp;\u0026amp; ...) NavLink items close the drawer on click Logout button is a placeholder Design System (index.css + App.css) # Styling is token-based to keep UI consistent and easy to adjust.\nCSS custom properties for: Colors, typography, spacing, radius, shadows, semantic status colors Mobile-first layout \u0026ldquo;Phone frame\u0026rdquo; shell on larger screens Media query switches to full-screen on real mobile devices Shared Components # InputField Controlled input via props Optional label via ternary rendering Props: type, placeholder, required, value, onChange Button Reusable button component Accepts onClick handler and className Auth Components # LoginForm Owns email and password state via useState Uses \u0026lt;form onSubmit={...}\u0026gt; and e.preventDefault() Currently logs credentials as a placeholder (API integration comes later) Pages # Login: branding wrapper + LoginForm AssetList: holds mock assets in state and renders an AssetCard per asset AssetDetail, EmployeeList, UserProfile: placeholders for upcoming implementation Asset Components # AssetCard Receives an asset prop Shows key fields (name, description, active status, last log date) Uses ternary rendering for boolean status Uses useNavigate to route to /assets/:id/logs on click Key React Concepts Practiced # Props + prop drilling Controlled inputs with useState Conditional rendering (\u0026amp;\u0026amp; and ternary) Component composition and reuse Lifting state to the lowest common ancestor Route patterns: nested routes + \u0026lt;Outlet /\u0026gt; NavLink vs Link (active styling) useNavigate for programmatic navigation Component Hierarchy # App ├── Login │ └── LoginForm │ ├── InputField (email) │ ├── InputField (password) │ └── Button (login) └── Layout ├── NavBar ├── DrawerMenu │ ├── NavLink (Home) │ ├── NavLink (Assets) │ ├── NavLink (Your Profile) │ ├── NavLink (User List) │ ├── NavLink (Manage Assets) │ ├── NavLink (Manage Users) │ └── Button (Log Out) └── \u0026lt;Outlet\u0026gt; ├── AssetList │ └── AssetCard (× n) ├── AssetDetail ├── EmployeeList └── UserProfile Frontend Screenshots # Login page (v1) Drawer menu (v1) Asset list (v1) ","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/devlog/maintenancelog-ninthweek/","section":"Devlogs","summary":"Devlog Week 9: React Frontend Kickoff (Vite, Routing, Layout) # This week starts the frontend phase of the Maintenance Log project. The focus was not new domain functionality yet, but establishing a clean React baseline: routing, layout composition, reusable UI building blocks, and a consistent styling foundation.\n","title":"Maintenance Log - Ninth Week: React Frontend Kickoff (Vite, Routing, Layout)","type":"devlog"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/maintenancelog/","section":"Tags","summary":"","title":"MaintenanceLog","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/project/","section":"Tags","summary":"","title":"Project","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/react/","section":"Tags","summary":"","title":"React","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/reactrouter/","section":"Tags","summary":"","title":"ReactRouter","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/series/","section":"Series","summary":"","title":"Series","type":"series"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"4 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/vite/","section":"Tags","summary":"","title":"Vite","type":"tags"},{"content":"","date":"9 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/backend/","section":"Tags","summary":"","title":"Backend","type":"tags"},{"content":"","date":"9 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/integration/","section":"Tags","summary":"","title":"Integration","type":"tags"},{"content":" Devlog Week 8: Final Backend Hardening \u0026amp; Reflection # This is my last backend-focused post before I shift to a React frontend. This week wasn’t about new domain features — it was about hardening what I already built: tightening a security flaw, centralizing validation, and cleaning up the external API seeding integration.\nWhat Changed This Week # Server-side employee attribution: Maintenance logs now derive performedByEmployeeId from JWT token (prevents impersonation) Auth identity helper: Removed repeated “get auth user → resolve employeeId” logic from controllers Password change endpoint: Implemented a secure PATCH /employees/{id}/password flow Password util extraction: Moved BCrypt hashing/verification into a dedicated PasswordUtil Centralized validation utility: All format/length validation in one place Two-layer validation: Controllers check required fields, services enforce format rules Integration seeding cleanup: Refactored the RandomUser seeding code and separated “demo benchmarking” from actual seeding Preventing Employee Impersonation # The Security Flaw # When a technician creates a maintenance log, the system needs to record who performed the work. Originally, the client sent performedByEmployeeId in the request body.\nThe problem: Any authenticated user could claim the work was done by someone else.\nExample malicious request:\nPOST /api/v1/assets/1/logs Authorization: Bearer \u0026lt;token\u0026gt; { \u0026#34;performedDate\u0026#34;: \u0026#34;2026-04-09T10:00:00\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;DONE\u0026#34;, \u0026#34;taskType\u0026#34;: \u0026#34;MAINTENANCE\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Routine check\u0026#34;, \u0026#34;performedByEmployeeId\u0026#34;: 5 ← Technician claims manager did the work } This breaks audit trail integrity. You can\u0026rsquo;t trust who actually performed maintenance.\nThe Fix: Server-Side Attribution # The employee who performed the work is now derived from the JWT token, not the request body.\nUpdated request (client can\u0026rsquo;t specify performer):\nPOST /api/v1/assets/1/logs Authorization: Bearer \u0026lt;token\u0026gt; { \u0026#34;performedDate\u0026#34;: \u0026#34;2026-04-09T10:00:00\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;DONE\u0026#34;, \u0026#34;taskType\u0026#34;: \u0026#34;MAINTENANCE\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Routine check\u0026#34; } Controller extracts performer from token:\npublic void createLogForAsset(Context ctx) { // Get authenticated user from JWT token UserDTO tokenUser = ctx.attribute(\u0026#34;authUser\u0026#34;); if (tokenUser == null) { throw new ApiException(401, \u0026#34;Missing authenticated employee\u0026#34;); } // Resolve employee ID from email String authEmail = tokenUser.getUsername(); Integer performedById = employeeIdentityService.getEmployeeIdByEmail(authEmail); if (performedById == null) { throw new ApiException(401, \u0026#34;Missing authenticated employee ID\u0026#34;); } // Validate request body (no performedByEmployeeId field) CreateLogRequest body = ctx.bodyValidator(CreateLogRequest.class) .check(dto -\u0026gt; dto.performedDate() != null, \u0026#34;Performed date required\u0026#34;) .check(dto -\u0026gt; dto.status() != null, \u0026#34;Status required\u0026#34;) .check(dto -\u0026gt; dto.taskType() != null, \u0026#34;Task type required\u0026#34;) .check(dto -\u0026gt; dto.comment() != null, \u0026#34;Comment required\u0026#34;) .get(); // Build request with server-controlled employee ID CreateLogRequest request = new CreateLogRequest( body.performedDate(), body.status(), body.taskType(), body.comment(), performedById ← Server sets this ); ctx.status(201).json(logService.create(assetId, request)); } Result: Logged work always attributed to the authenticated user. No way to impersonate.\nISP: EmployeeIdentityService # The controller needs to resolve an email to an employee ID. It shouldn\u0026rsquo;t depend on the full EmployeeService or touch DAOs directly.\nNew minimal interface:\npublic interface EmployeeIdentityService { Integer getEmployeeIdByEmail(String email); } Implementation:\npublic class EmployeeIdentityServiceImpl implements EmployeeIdentityService { private final IEmployeeEmailQuery employeeDao; @Override public Integer getEmployeeIdByEmail(String email) { Employee employee = employeeDao.getByEmail(email); return employee == null ? null : employee.getEmployeeId(); } } Why this matters: The controller depends only on what it needs (Interface Segregation Principle). Easy to test, easy to mock, no coupling to full employee service.\nAuth Helper + Password Changes # Once I started pulling identity from the token in multiple places (maintenance logs, employee operations, etc.), I noticed the same boilerplate creeping into controllers:\nRead authenticated user from ctx.attribute(\u0026quot;authUser\u0026quot;) Extract the username/email Resolve employee ID from the database Return 401 if anything is missing To keep controllers focused on request/response logic, I extracted that pattern into a small helper (EmployeeAuthUtil). It returns a simple value object (authenticated library user + resolved employee ID), and controllers can just call a single “require auth” method instead of repeating the same guards.\nI also kept ISP intact by leaning on a minimal identity contract (EmployeeIdentityService) so controllers don’t have to depend on an entire service API just to look up an ID. In practice, EmployeeService extends EmployeeIdentityService, so controllers can accept the minimal contract even when they’re passed the full service.\nThe controller call ends up as a single line:\nInteger employeeId = EmployeeAuthUtil.requireAuthenticatedEmployee(ctx, employeeService).id(); This is the same pattern I now use when deriving performedByEmployeeId for maintenance logs, instead of duplicating token extraction and email → employee ID lookups.\nPassword Change Endpoint (Secured) # I added a password change route:\nPATCH /employees/{id}/password (role: AUTHENTICATED) Key checks:\nRequires both oldPassword and newPassword in the JSON body (non-null / non-blank) Resolves the authenticated employee from the token via EmployeeAuthUtil Enforces that the path ID matches the authenticated employee ID (otherwise 403) Verifies the old password against the stored hash before updating Validates the new password using the same centralized rules (ValidationUtil.validatePasswordNonNull(newPassword)) This makes it hard to accidentally expose a “change anyone’s password” endpoint, and it forces a re-auth style proof (old password) before persisting the new one.\nPassword Crypto Refactor (PasswordUtil) # Previously, BCrypt hashing/verification lived in SecurityServiceImpl. That worked, but it created awkward coupling:\nSeeding, DAOs, and tests were calling security-layer helpers So I extracted password hashing and verification into a dedicated utility (PasswordUtil) and removed the helper methods from SecurityServiceImpl. The side effect is that persistence-layer code can verify passwords without depending on the security service at all.\nI also updated the remaining callers (seeding, DAO verification, and test populators) to use PasswordUtil so there’s a single place for the hashing rules.\nSanity check: After migrating the call sites, the Maven test suite still passed.\nCentralized Validation # Previously, validation logic was scattered: some in controllers (Javalin bodyValidator), some in services (ad-hoc checks), inconsistent error messages.\nThe ValidationUtil # All format/length validation now lives in one utility class:\npublic final class ValidationUtil { // Precompiled regex (performance optimization) private static final Pattern EMAIL_PATTERN = Pattern.compile(\u0026#34;^[^@\\\\s]+@[^@\\\\s]+\\\\.[^@\\\\s]+$\u0026#34;); private static final Pattern PHONE_PATTERN = Pattern.compile(\u0026#34;^[0-9+()\\\\s-]{6,20}$\u0026#34;); public static void lengthBetween(String value, String fieldName, int min, int max) { if (value == null) return; int length = value.trim().length(); if (length \u0026lt; min || length \u0026gt; max) { throw new ApiException(400, String.format(\u0026#34;%s must be between %d and %d characters\u0026#34;, fieldName, min, max)); } } public static void validateEmailNonNull(String email) { matches(email, EMAIL_PATTERN, \u0026#34;Invalid email format\u0026#34;); if (email.trim().length() \u0026gt; 254) { // RFC 5321 limit throw new ApiException(400, \u0026#34;Email is too long\u0026#34;); } } public static void validatePhoneNonNull(String phone) { matches(phone, PHONE_PATTERN, \u0026#34;Invalid phone format\u0026#34;); } public static void validatePasswordNonNull(String password) { lengthBetween(password, \u0026#34;Password\u0026#34;, 4, 72); // BCrypt limit } } Validation rules:\nEmail: Regex + max 254 chars (RFC standard) Phone: 6-20 chars, allows +, (), spaces, - Password: 4-72 chars (BCrypt input limit) Names: 2-50 chars All throw ApiException(400, message) for consistent error handling.\nTwo-Layer Validation Pattern # Validation happens at two levels with different responsibilities:\nLayer 1: Controller - Required Fields\nControllers reject requests missing required fields:\npublic void register(Context ctx) { CreateEmployeeRequest request = ctx.bodyValidator(CreateEmployeeRequest.class) .check(dto -\u0026gt; dto.firstName() != null \u0026amp;\u0026amp; !dto.firstName().trim().isEmpty(), \u0026#34;First name is required\u0026#34;) .check(dto -\u0026gt; dto.lastName() != null \u0026amp;\u0026amp; !dto.lastName().trim().isEmpty(), \u0026#34;Last name is required\u0026#34;) .check(dto -\u0026gt; dto.email() != null \u0026amp;\u0026amp; !dto.email().trim().isEmpty(), \u0026#34;Email is required\u0026#34;) .check(dto -\u0026gt; dto.password() != null \u0026amp;\u0026amp; !dto.password().trim().isEmpty(), \u0026#34;Password is required\u0026#34;) .check(dto -\u0026gt; dto.phone() != null \u0026amp;\u0026amp; !dto.phone().trim().isEmpty(), \u0026#34;Phone is required\u0026#34;) .check(dto -\u0026gt; dto.role() != null, \u0026#34;Role is required\u0026#34;) .get(); ctx.status(201).json(securityService.register(request)); } Why at controller level? Fast rejection. If firstName is missing, don\u0026rsquo;t waste time on service-layer validation.\nLayer 2: Service - Format \u0026amp; Business Rules\nServices enforce format constraints, length limits, and business rules:\n@Override public EmployeeDTO register(CreateEmployeeRequest request) { // Format validation ValidationUtil.lengthBetween(request.firstName(), \u0026#34;First name\u0026#34;, 2, 50); ValidationUtil.lengthBetween(request.lastName(), \u0026#34;Last name\u0026#34;, 2, 50); ValidationUtil.validateEmailNonNull(request.email()); ValidationUtil.validatePhoneNonNull(request.phone()); ValidationUtil.validatePasswordNonNull(request.password()); // Business rule: email uniqueness if (secDAO.getByEmail(request.email()) != null) { throw new ApiException(409, \u0026#34;Email already exists\u0026#34;); } // Sanitize input (trim whitespace) Employee employee = Employee.builder() .firstName(request.firstName().trim()) .lastName(request.lastName().trim()) .email(request.email().trim()) .phone(request.phone().trim()) .role(request.role()) .password(hashPassword(request.password())) .active(true) .build(); Employee created = secDAO.create(employee); return EmployeeMapper.toDTO(created); } Why at service level? Consistent enforcement across all entry points (register, update, seeding). Reusable validation logic.\nWhy Two Layers? # Layer Checks Rationale Controller Required fields present Early rejection, don\u0026rsquo;t waste service time Service Format, length, business rules Consistent enforcement, reusable logic Both throw ApiException(400), so error responses are uniform.\nIntegration Seeding Cleanup # I originally built the RandomUser seeding integration earlier in the project (week 3). This week I revisited it because it had grown into a mix of “real seeding” and “benchmark/demo” code.\nWhat I changed:\nSeparated benchmarking from actual seeding, so it doesn’t run every time by default Tightened up the multithreaded fetch to be safer (caps on threads, guaranteed shutdown) Made failures per-batch instead of failing the entire seed run Kept the fixed seed endpoint for deterministic demo/test data The main win here wasn’t “more concurrency” — it was making the seeding tool predictable, maintainable, and less intrusive while I’m focusing on the core API.\nReflection: What Went Well # Architecture choices paid off:\nLayered architecture made it easy to add security without rewriting controllers ISP in DAOs meant services only depend on what they need DTO mappers kept entities separate from API contracts Security was added cleanly:\nJWT library isolated in security layer via conversion methods beforeMatched/afterMatched split makes sense once you understand Javalin\u0026rsquo;s lifecycle Role hierarchy without multiple database roles keeps the data model simple Testing strategy worked:\nRestAssured + Testcontainers caught bugs early Separate test ports prevented port conflicts Seeded data with known passwords made auth tests straightforward Reflection: What Was Hard # JWT library integration:\nHaving two different UserDTO types was confusing until I renamed domain entities to Employee Understanding when to use beforeMatched vs afterMatched took some trial and error Validation duplication:\nInitially had validation scattered across controllers and services Centralizing in ValidationUtil should have happened earlier Testing with authentication:\nEvery test breaking when I added security was tedious Should have built security earlier to avoid mass test updates Deployment setup:\nDatabase table creation strategy (Hibernate update vs migrations) still feels hacky Manual admin seeding is functional but not elegant What I\u0026rsquo;d Do Differently # 1. Centralize validation sooner\nValidation logic duplicated across services before I built ValidationUtil. Should have been a week 2 task.\n2. Add input sanitization earlier\nTrimming strings before persistence should have been part of the initial validation strategy, not added later.\nWhat\u0026rsquo;s Next: Frontend in React # The backend is functionally complete:\nFull CRUD for employees, assets, maintenance logs JWT authentication with role-based authorization Centralized validation ISP-compliant DAO layer Comprehensive test coverage External API integration for seeding Next up: Building a React frontend that consumes this API.\nPlanned features:\nLogin page with JWT token management Employee dashboard (role-dependent views) Asset list with search/filter Maintenance log creation form Manager analytics (logs by employee, status distribution) Technical stack:\nReact (JavaScript) React Router for navigation Fetch API for HTTP requests Context API for auth state The backend is done for now. Time to make it usable.\n","date":"9 April 2026","externalUrl":null,"permalink":"/Portfolio/devlog/maintenancelog-eightweek/","section":"Devlogs","summary":"Devlog Week 8: Final Backend Hardening \u0026 Reflection # This is my last backend-focused post before I shift to a React frontend. This week wasn’t about new domain features — it was about hardening what I already built: tightening a security flaw, centralizing validation, and cleaning up the external API seeding integration.\n","title":"Maintenance Log - Final Backend Week","type":"devlog"},{"content":"","date":"9 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/security/","section":"Tags","summary":"","title":"Security","type":"tags"},{"content":"","date":"9 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/validation/","section":"Tags","summary":"","title":"Validation","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/ci/cd/","section":"Tags","summary":"","title":"CI/CD","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/deployment/","section":"Tags","summary":"","title":"Deployment","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/devops/","section":"Tags","summary":"","title":"DevOps","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/docker/","section":"Tags","summary":"","title":"Docker","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/githubactions/","section":"Tags","summary":"","title":"GitHubActions","type":"tags"},{"content":" Devlog Week 7: Deployment, CI/CD \u0026amp; Running It For Real # This week was all about getting the project out of my IDE and onto an actual server. The goal wasn’t to add new domain features — it was to make the API build, ship, and update automatically.\nDeployed API: https://maintenancelog.heltsort.dk/\nRight now the root endpoint just returns a small JSON welcome message: {\u0026quot;message\u0026quot;:\u0026quot;Welcome to the Maintenance Log!\u0026quot;}\nIf you hit /routes you can see an overview of the available endpoints and what roles are required.\nWhat Changed This Week # CI pipeline: GitHub Actions builds the project with Maven and only continues if the tests pass Docker image: the build produces a Docker image and pushes it to Docker Hub Server setup: DigitalOcean droplet runs the API via docker-compose Auto-updates: Watchtower pulls new images and restarts the container automatically HTTPS + domain: Caddy reverse proxies the API behind a real domain + TLS The Big Picture: From Push → Running Server # My deployment pipeline ended up being roughly:\nI push to main GitHub Actions runs mvn package (tests included) If that succeeds, it builds a Docker image from my Dockerfile The image is pushed to Docker Hub (jespertaxicon/maintenancelog:latest) My droplet pulls/runs the image using docker-compose Watchtower monitors the image and updates the running container when :latest changes Caddy sits in front and serves the API over HTTPS on my domain This matches the “full pipeline” setup we used in the Deployment \u0026amp; DevOps week (Docker + GitHub Actions + Docker Hub + DigitalOcean + Watchtower + Caddy).\nGitHub Actions: Build, Test, Then Push # I used a workflow based on the course setup. The important part (for me) was that deployment is gated by tests — if my tests fail, nothing gets pushed.\nMy workflow does:\ncheckout setup JDK 17 mvn --batch-mode --update-snapshots package login to Docker Hub build + push the Docker image I also had to provide environment variables at build time. The fix was adding the JWT-related values as GitHub Secrets and injecting them into the Maven build step (so the build/test phase in CI has the same inputs as my local setup):\nISSUER SECRET_KEY TOKEN_EXPIRE_TIME Docker Compose: API + Caddy + Watchtower # On the droplet I used a docker-compose.yml similar to the course example, but with my own image.\nI’m also running Postgres on the droplet, but it’s firewall-blocked so it’s not exposed publicly — it’s only meant to be reachable from the server/internal network.\nMy API container ended up roughly like this (redacted):\nmaintenancelog: image: \u0026#34;myuser\u0026#34;/maintenancelog:latest container_name: maintenancelog environment: - DEPLOYED=true - DB_NAME=... - DB_USERNAME=... - DB_PASSWORD=... - CONNECTION_STR=... - SECRET_KEY=... - ISSUER=... - TOKEN_EXPIRE_TIME=... I’m intentionally not listing port mappings here — the important part is that the API runs inside the Docker network and is reached through a reverse proxy.\nMy Caddyfile is basically:\nmaintenancelog.heltsort.dk { reverse_proxy maintenancelog:7070 } So Caddy acts as the front door (reverse proxy + TLS), and Watchtower handles automatic redeploys when the Docker Hub image updates.\nThe Annoying Part: Tests Blocking Deployment # The biggest pain point this week was not Docker — it was the fact that the pipeline is set up correctly: tests must pass before deployment happens.\nI hit two issues:\nTests timing out After adding JWT auth/authorization, my integration tests got heavier. Some runs started timing out because the test setup now includes token creation and authenticated requests.\nGitHub Actions env vars Locally everything worked, but in GitHub Actions the build/test step didn’t have the environment variables it needed. That caused tests to fail only in CI, and I couldn’t reproduce it locally at first.\nThe takeaway for me: when your build depends on env vars, you need to treat CI like a separate environment and be explicit about what it gets (secrets/env vars), otherwise you end up chasing “works on my machine” failures.\nWhat I Learned # A “real” deployment isn’t just shipping code — it’s making the system repeatable: build, test, package, run. Gating Docker pushes on tests is painful when tests are flaky… but it’s the right kind of painful. Caddy + Watchtower + Docker Compose is a nice combo for a small API: simple mental model, and updates are basically automatic once it’s set up. ","date":"29 March 2026","externalUrl":null,"permalink":"/Portfolio/devlog/maintenancelog-seventhweek/","section":"Devlogs","summary":"Devlog Week 7: Deployment, CI/CD \u0026 Running It For Real # This week was all about getting the project out of my IDE and onto an actual server. The goal wasn’t to add new domain features — it was to make the API build, ship, and update automatically.\n","title":"Maintenance Log - Seventh Week: Deployment, CI/CD \u0026 Running It For Real","type":"devlog"},{"content":" Links # Code repository: MaintenanceLog (GitHub) Hand-in video: Video (Youtube) What this page is # This page contains the specific devlog posts from my portfolio that I want to include in the exam. The goal is to show the development of my application over time, including what I built, what I learned, and the technical decisions I made along the way.\nAlso worth noting: the README in the repo has been updated over time to match the exam/hand-in requirements, and the longer technical rundowns in these posts are there for depth — not required reading.\nWhy these posts # Week 1 establishes the baseline: initial scope, core domain concepts, and early architectural decisions (entities, relationships, immutability/soft-delete direction). Week 5 focuses on architecture refinement and testing strategy (ISP-driven interface design, RestAssured + Testcontainers integration tests, DTO mapper pattern). Week 6 shows a “deployment-ready” step where I secured the API with JWT authentication and role-based authorization, and how that was integrated cleanly without breaking the existing structure/tests. ","date":"23 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/exam-portfolio/","section":"Tags","summary":"Links # Code repository: MaintenanceLog (GitHub) Hand-in video: Video (Youtube) What this page is # This page contains the specific devlog posts from my portfolio that I want to include in the exam. The goal is to show the development of my application over time, including what I built, what I learned, and the technical decisions I made along the way.\n","title":"Exam Portfolio","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/jwt/","section":"Tags","summary":"","title":"JWT","type":"tags"},{"content":" Devlog Week 6: JWT Authentication \u0026amp; Role-Based Authorization # This week was entirely dedicated to building a deployment-ready authentication and authorization system. No new domain features—just securing everything that already exists. The focus was on JWT tokens, role hierarchies, and integrating security seamlessly into the existing architecture without breaking tests or existing functionality.\nWhat Changed This Week # Complete JWT authentication system: Login endpoint, token generation, token verification Role-based authorization: ADMIN \u0026gt; MANAGER \u0026gt; TECHNICIAN \u0026gt; AUTHENTICATED hierarchy Security integration: beforeMatched/afterMatched hooks in Javalin request lifecycle User → Employee refactoring: Renamed all user-related code to \u0026ldquo;employee\u0026rdquo; for clarity with library DTO naming DTO conversion layer: Domain DTOs separated from library DTOs with explicit conversion All routes protected: Role requirements on every endpoint except login Test infrastructure updated: All tests now authenticate before making requests The Security Layer Problem # This week wasn\u0026rsquo;t about learning JWT from scratch — our course uses a small helper library provided by our lecturer (dk.bugelhartmann.TokenSecurity) so issuing and validating tokens is pretty painless.\nThe real work was fitting authentication + authorization into a codebase that already had a nice separation of concerns, without turning the whole project into \u0026ldquo;security code everywhere\u0026rdquo;.\nWhat I needed to add (without wrecking anything): I already had a working REST API with controllers, services, DAOs, and a solid test suite. Now it needed to:\nAdd authentication (who are you?) Add authorization (what can you do?) Keep existing behavior intact Update 100+ tests to include tokens Keep the architecture clean (no library DTOs leaking into controllers/services) The constraint: The helper library has its own DTO shape (dk.bugelhartmann.UserDTO) while my domain uses EmployeeDTO. Bridging the two cleanly required a conversion layer so the rest of the application doesn\u0026rsquo;t take a dependency on the library.\nAuthentication vs Authorization: The Javalin Lifecycle # Request flow\nWhen a request hits the server, Javalin processes it in this order:\n1. beforeMatched handlers ← Run BEFORE finding which route to use 2. Route matching ← Javalin finds the endpoint 3. afterMatched handlers ← Run AFTER route found, BEFORE controller 4. Endpoint handler ← Your controller method 5. after handlers ← Cleanup Why does this matter? Because authentication and authorization need different information, and Javalin only exposes some of it after routing.\nAuthentication (beforeMatched)\nQuestion: \u0026ldquo;Is this token valid?\u0026rdquo;\nWhat it needs:\nToken from Authorization header Secret key to verify signature Current timestamp to check expiration What it doesn\u0026rsquo;t need:\nWhich endpoint is being called What roles are required What the request body contains The one practical exception: you still keep a tiny allowlist of open endpoints (like POST /auth/login) that should skip token checks.\n@Override public void authenticate(Context ctx) { // OPTIONS requests (CORS preflight) skip authentication if (ctx.method().toString().equals(\u0026#34;OPTIONS\u0026#34;)) { ctx.status(200); return; } // Open endpoints (e.g. login) skip authentication if (isOpenEndpoint(ctx)) { return; } // Extract and verify token UserDTO verifiedTokenUser = validateAndGetUserFromToken(ctx); ctx.attribute(\u0026#34;user\u0026#34;, verifiedTokenUser); // Store for authorize() } Why beforeMatched?\nToken validation is the same regardless of which endpoint is being called If the token is invalid, reject immediately—don\u0026rsquo;t waste time routing Fast fail principle: cheapest check first Authorization (afterMatched)\nQuestion: \u0026ldquo;Does this user have permission for THIS specific endpoint?\u0026rdquo;\nWhat it needs:\nThe user (from authentication) The route\u0026rsquo;s required roles (ctx.routeRoles()) The catch: ctx.routeRoles() is only available after route matching. Before matching, Javalin doesn\u0026rsquo;t even know which endpoint you\u0026rsquo;re hitting.\n@Override public void authorize(Context ctx) { Set\u0026lt;String\u0026gt; allowedRoles = ctx.routeRoles() // ← Only exists after matching .stream() .map(role -\u0026gt; role.toString().toUpperCase()) .collect(Collectors.toSet()); if (isOpenEndpoint(allowedRoles)) return; UserDTO user = ctx.attribute(\u0026#34;user\u0026#34;); // Get user from authenticate() if (user == null) { throw new ForbiddenResponse(\u0026#34;No user was added from the token\u0026#34;); } if (!userHasAllowedRole(user, allowedRoles)) { throw new ForbiddenResponse(\u0026#34;Unauthorized\u0026#34;); } } Why afterMatched?\nNeeds ctx.routeRoles() which only exists post-matching Can make role-specific authorization decisions Still runs before the controller, so unauthorized requests never reach business logic The pattern\nAuthenticate (beforeMatched): \u0026ldquo;Do I know who you are?\u0026rdquo;\nAuthorize (afterMatched): \u0026ldquo;Are you allowed to do this specific thing?\u0026rdquo;\nSplitting these concerns into separate lifecycle hooks keeps the code clean and testable.\nThe DTO Conversion Challenge # Two DTOs, same job\nThe JWT library uses its own DTO:\n// dk.bugelhartmann.UserDTO (library) public class UserDTO { private String username; private String password; private Set\u0026lt;String\u0026gt; roles; // Multiple roles as strings } My domain has a different structure:\n// app.dtos.EmployeeDTO (domain) public record EmployeeDTO( Integer id, String firstName, String lastName, String phone, String email, // ← Not \u0026#34;username\u0026#34; EmployeeRole role, // ← Single enum, not Set\u0026lt;String\u0026gt; boolean active ) {} The awkward bit: the token library works with dk.bugelhartmann.UserDTO, while the rest of my application speaks in EmployeeDTO. I didn\u0026rsquo;t want the library DTO creeping into controller/service code just because security needs it.\nOptions I considered:\nUse the library DTO throughout the app (easy, but it leaks a course helper into the domain) Fight the library and force everything to use my DTO (more work than it\u0026rsquo;s worth) Keep them separate and convert at the boundary I went with option 3.\nConversion layer: created a single conversion method:\nprivate dk.bugelhartmann.UserDTO convertToLibraryDTO(EmployeeDTO employeeDTO) { return new dk.bugelhartmann.UserDTO( employeeDTO.email(), // Maps to username Set.of(employeeDTO.role().name()) // Single role as Set\u0026lt;String\u0026gt; ); } Used only when creating tokens:\nprivate String createToken(EmployeeDTO employeeDTO) { // Convert domain DTO to library DTO dk.bugelhartmann.UserDTO libraryDTO = convertToLibraryDTO(employeeDTO); // Library creates token return tokenSecurity.createToken(libraryDTO, ISSUER, TOKEN_EXPIRE_TIME, SECRET_KEY); } For token verification, the library DTO stays internal:\nprivate dk.bugelhartmann.UserDTO verifyToken(String token) { if (tokenSecurity.tokenIsValid(token, SECRET_KEY) \u0026amp;\u0026amp; tokenSecurity.tokenNotExpired(token)) { return tokenSecurity.getUserWithRolesFromToken(token); // Returns library DTO } throw new ApiException(403, \u0026#34;Token is not valid\u0026#34;); } Result: Controllers and services never see the library DTO. It\u0026rsquo;s purely an internal security layer concern.\nRole Hierarchy Without Multiple Roles # In this system, each employee has exactly one role: ADMIN, MANAGER, or TECHNICIAN. But access control follows a hierarchy:\nADMIN can do everything MANAGER and TECHNICIAN can do MANAGER can do everything TECHNICIAN can do TECHNICIAN can only do TECHNICIAN things The constraint: the library supports Set\u0026lt;String\u0026gt; roles, but my domain model stores a single EmployeeRole. I didn\u0026rsquo;t want to change the data model just to make the token shape happy.\nInstead of storing multiple roles anywhere, I just expand permissions at authorization time.\nStore only the actual role in the token:\n// Token contains just the employee\u0026#39;s real role Set.of(employeeDTO.role().name()) // [\u0026#34;ADMIN\u0026#34;] or [\u0026#34;MANAGER\u0026#34;] or [\u0026#34;TECHNICIAN\u0026#34;] Expand the role during authorization:\nprivate static final Map\u0026lt;String, Set\u0026lt;String\u0026gt;\u0026gt; ROLE_HIERARCHY = Map.of( \u0026#34;ADMIN\u0026#34;, Set.of(\u0026#34;ADMIN\u0026#34;, \u0026#34;MANAGER\u0026#34;, \u0026#34;TECHNICIAN\u0026#34;, \u0026#34;AUTHENTICATED\u0026#34;), \u0026#34;MANAGER\u0026#34;, Set.of(\u0026#34;MANAGER\u0026#34;, \u0026#34;TECHNICIAN\u0026#34;, \u0026#34;AUTHENTICATED\u0026#34;), \u0026#34;TECHNICIAN\u0026#34;, Set.of(\u0026#34;TECHNICIAN\u0026#34;, \u0026#34;AUTHENTICATED\u0026#34;) ); private static boolean userHasAllowedRole(UserDTO user, Set\u0026lt;String\u0026gt; allowedRoles) { // Get user\u0026#39;s single role from token String userRole = user.getRoles().iterator().next(); // Expand via hierarchy Set\u0026lt;String\u0026gt; effectiveRoles = ROLE_HIERARCHY.getOrDefault(userRole, Set.of(userRole)); // Check if any effective role matches required roles return effectiveRoles.stream() .anyMatch(role -\u0026gt; allowedRoles.contains(role.toUpperCase())); } Example flow:\nEmployee logs in as ADMIN:\nToken contains: {username: \u0026quot;admin@example.com\u0026quot;, roles: [\u0026quot;ADMIN\u0026quot;]} Request to POST /assets (requires MANAGER):\nauthenticate() extracts user from token: roles = [\u0026quot;ADMIN\u0026quot;] authorize() gets required roles: [\u0026quot;MANAGER\u0026quot;] Expands ADMIN via hierarchy: [\u0026quot;ADMIN\u0026quot;, \u0026quot;MANAGER\u0026quot;, \u0026quot;TECHNICIAN\u0026quot;, \u0026quot;AUTHENTICATED\u0026quot;] Checks if expanded roles contain \u0026ldquo;MANAGER\u0026rdquo;: YES Request proceeds Net effect: one role in the database and token, but the permissions still behave like a hierarchy — and the logic lives in one place.\nThe AUTHENTICATED Role Pattern # Some endpoints should be accessible to any logged-in user, regardless of role:\nGET /employees — view employee directory GET /assets — view asset list GET /logs — view maintenance history The problem: how do you express \u0026ldquo;any logged-in user\u0026rdquo; without copy-pasting three roles onto every route?\nBad approach:\nget(employeeController::getAll, EmployeeRole.ADMIN, EmployeeRole.MANAGER, EmployeeRole.TECHNICIAN); This is verbose and brittle — if you ever add a new role, you\u0026rsquo;d have to hunt down every endpoint and update it.\nThe fix was adding a special AUTHENTICATED role.\nAdded to enum:\npublic enum EmployeeRole implements RouteRole { TECHNICIAN, MANAGER, ADMIN, AUTHENTICATED // ← Special: \u0026#34;any logged-in user\u0026#34; } Added to hierarchy:\nprivate static final Map\u0026lt;String, Set\u0026lt;String\u0026gt;\u0026gt; ROLE_HIERARCHY = Map.of( \u0026#34;ADMIN\u0026#34;, Set.of(\u0026#34;ADMIN\u0026#34;, \u0026#34;MANAGER\u0026#34;, \u0026#34;TECHNICIAN\u0026#34;, \u0026#34;AUTHENTICATED\u0026#34;), \u0026#34;MANAGER\u0026#34;, Set.of(\u0026#34;MANAGER\u0026#34;, \u0026#34;TECHNICIAN\u0026#34;, \u0026#34;AUTHENTICATED\u0026#34;), \u0026#34;TECHNICIAN\u0026#34;, Set.of(\u0026#34;TECHNICIAN\u0026#34;, \u0026#34;AUTHENTICATED\u0026#34;) ); Used in routes:\nget(employeeController::getAll, EmployeeRole.AUTHENTICATED); // ← Clean! How it works:\nAUTHENTICATED is in every role\u0026rsquo;s effective permissions Any logged-in user satisfies the requirement But unauthenticated requests still get rejected (no token = no roles = fail) Consolidating Employee Creation # Before this week, there were two ways to create employees:\nPOST /users via UserController → UserService.create() (Planned) POST /auth/register via SecurityController → SecurityService.register() The problem: duplicated logic. Both places would need to:\nHash passwords Check for duplicate emails Set default values (active = true) Create the employee entity Return a DTO So I consolidated it.\nRemoved:\nEmployeeService.create() method POST /employees endpoint EmployeeController.create() method Consolidated into:\nSecurityService.register() — only way to create employees @Override public EmployeeDTO register(CreateEmployeeRequest request) { // Check duplicate email if (secDAO.getByEmail(request.email()) != null) { throw new ApiException(409, \u0026#34;Email already exists\u0026#34;); } // Create employee with hashed password Employee employee = Employee.builder() .firstName(request.firstName()) .lastName(request.lastName()) .email(request.email()) .phone(request.phone()) .role(request.role()) // Admin can specify any role .password(hashPassword(request.password())) .active(true) .build(); Employee created = secDAO.create(employee); return EmployeeMapper.toDTO(created); } Protected the endpoint:\npost(\u0026#34;/register\u0026#34;, securityController::register, EmployeeRole.MANAGER); Result:\nOne source of truth for employee creation No duplicate password hashing logic Security service owns security-related operations (which is where it belongs) Managers/admins can create employees; technicians cannot Testing with Authentication # Once endpoints require tokens, any test that forgets the header gets an instant 403:\n// Before (worked last week) given() .when() .get(\u0026#34;/assets\u0026#34;) .then() .statusCode(200); // After (fails with 403) given() .when() .get(\u0026#34;/assets\u0026#34;) // ← No token! .then() .statusCode(200); // ← Gets 403 instead So the test setup now logs in first.\nUpdated TestPopulator to include passwords:\npublic static Map\u0026lt;String, Employee\u0026gt; populateEmployees(EntityManagerFactory emf) { String hashedPassword = SecurityServiceImpl.hashPassword(\u0026#34;password123\u0026#34;); Employee employee1 = Employee.builder() .email(\u0026#34;Johndoe@mail.dk\u0026#34;) .password(hashedPassword) // ← Added! .role(EmployeeRole.TECHNICIAN) .active(true) .build(); // ... more employees } Login in test setup:\nprivate static String authenticatedToken; private static String managerToken; private static String adminToken; @BeforeEach void setUp() { employees = TestPopulator.populateEmployees(emf); assets = TestPopulator.populateAssets(emf); // Get tokens for each role authenticatedToken = loginAsEmployee(\u0026#34;Johndoe@mail.dk\u0026#34;, \u0026#34;password123\u0026#34;); managerToken = loginAsEmployee(\u0026#34;Janedoe@mail.dk\u0026#34;, \u0026#34;password123\u0026#34;); adminToken = loginAsEmployee(\u0026#34;Jeffdoe@mail.dk\u0026#34;, \u0026#34;password123\u0026#34;); } private String loginAsEmployee(String email, String password) { return given() .contentType(\u0026#34;application/json\u0026#34;) .body(String.format(\u0026#34;\u0026#34;\u0026#34; { \u0026#34;email\u0026#34;: \u0026#34;%s\u0026#34;, \u0026#34;password\u0026#34;: \u0026#34;%s\u0026#34; } \u0026#34;\u0026#34;\u0026#34;, email, password)) .when() .post(\u0026#34;/auth/login\u0026#34;) .then() .statusCode(200) .extract() .path(\u0026#34;token\u0026#34;); } Use in tests:\n@Test void testGetAllActiveAssets() { given() .header(\u0026#34;Authorization\u0026#34;, \u0026#34;Bearer \u0026#34; + authenticatedToken) // ← Add token .when() .get(\u0026#34;/assets?active=true\u0026#34;) .then() .statusCode(200); } Testing authorization\nOnce I had tokens for each role, testing authorization became pretty straightforward:\n@Test void testPostAssetAsManager() { given() .header(\u0026#34;Authorization\u0026#34;, \u0026#34;Bearer \u0026#34; + managerToken) // ← Manager token .contentType(\u0026#34;application/json\u0026#34;) .body(\u0026#34;\u0026#34;\u0026#34;{\u0026#34;name\u0026#34;: \u0026#34;New Machine\u0026#34;, \u0026#34;active\u0026#34;: true}\u0026#34;\u0026#34;\u0026#34;) .when() .post(\u0026#34;/assets\u0026#34;) // Requires MANAGER .then() .statusCode(201); // Success } @Test void testPostAssetAsTechnicianFails() { given() .header(\u0026#34;Authorization\u0026#34;, \u0026#34;Bearer \u0026#34; + authenticatedToken) // ← Technician token .contentType(\u0026#34;application/json\u0026#34;) .body(\u0026#34;\u0026#34;\u0026#34;{\u0026#34;name\u0026#34;: \u0026#34;Should Fail\u0026#34;, \u0026#34;active\u0026#34;: true}\u0026#34;\u0026#34;\u0026#34;) .when() .post(\u0026#34;/assets\u0026#34;) // Requires MANAGER .then() .statusCode(403); // Forbidden } Result: all the existing tests were easy to update systematically, and it gave me confidence that the security rules were actually enforced end-to-end.\nThe User → Employee Refactoring # With the library using dk.bugelhartmann.UserDTO and my domain talking about users too, it got confusing fast. In code reviews it was always: \u0026ldquo;wait — which UserDTO is this?\u0026rdquo;\nThe confusion:\nimport dk.bugelhartmann.UserDTO; // Library import app.dtos.UserDTO; // Domain - COMPILE ERROR! // Now what? UserDTO user = ... // Which one?! The fix: rename my domain entities to Employee.\nRenamed:\nUser → Employee UserDTO → EmployeeDTO CreateUserRequest → CreateEmployeeRequest UserService → EmployeeService UserController → EmployeeController UserDAO → EmployeeDAO All tests, all routes, all references Result: much clearer intent. UserDTO is the library token DTO, EmployeeDTO is my domain DTO.\nAccess Control Matrix # Here\u0026rsquo;s the permission structure I ended up with:\nEndpoint Role Required Who Can Access Authentication POST /auth/login None Anyone POST /auth/register MANAGER Manager, Admin Employees GET /employees AUTHENTICATED All logged-in users GET /employees/{id} AUTHENTICATED All logged-in users PUT /employees/{id} MANAGER Manager, Admin DELETE /employees/{id} ADMIN Admin only PATCH /employees/{id} ADMIN Admin only Assets GET /assets AUTHENTICATED All logged-in users GET /assets/{id} AUTHENTICATED All logged-in users POST /assets MANAGER Manager, Admin PATCH /assets/{id} MANAGER Manager, Admin DELETE /assets/{id} ADMIN Admin only Logs (nested) GET /assets/{id}/logs AUTHENTICATED All logged-in users POST /assets/{id}/logs TECHNICIAN Technician, Manager, Admin Logs (standalone) GET /logs AUTHENTICATED All logged-in users GET /logs/{id} AUTHENTICATED All logged-in users GET /logs/employee/{id} MANAGER Manager, Admin The pattern: Read access for authenticated users, write access for appropriate roles, destructive actions for admins only.\nDesign Decisions This Week # 47. JWT Authentication via Lecturer-Provided Helper Library — Used dk.bugelhartmann.TokenSecurity for token generation/verification; conversion layer isolates library DTO from domain\n48. Security Service Owns Employee Creation — Consolidated all employee creation through /auth/register; removed EmployeeService.create() to avoid duplication\n49. beforeMatched for Authentication — Token validation runs before route matching; doesn\u0026rsquo;t need to know endpoint requirements\n50. afterMatched for Authorization — Role checking runs after route matching; requires ctx.routeRoles() which is only available post-match\n51. Library DTO Internal to Security Layer — dk.bugelhartmann.UserDTO used only for token operations; never exposed to controllers or returned in responses\n52. Domain DTO for All External Communication — EmployeeDTO used in controller responses and service layer; clean separation from library implementation details\n53. Conversion Method Pattern — convertToLibraryDTO() handles mapping between domain and library DTOs; single responsibility, easy to test\n54. Role Hierarchy via Map — Static map defines role inheritance; expanded during authorization check rather than storing multiple roles in token\n55. Single Role per Employee — Database stores one EmployeeRole enum; hierarchy expansion happens at authorization time, not in data model\n56. AUTHENTICATED Role for Any Logged-In User — Special role that all roles inherit; enables \u0026ldquo;authenticated but any role\u0026rdquo; endpoints without listing all roles\n57. Open Endpoints Have No Roles — Absence of roles = open endpoint; explicit rather than special marker role\n58. Soft Delete for Employees — Inactive employees can\u0026rsquo;t login (checked in login()); preserved in database for historical maintenance log references\n59. Login Returns Token + User — Client gets both JWT token and user details in single response; avoids extra request to fetch user data\n60. Password Hashing with BCrypt — Static hashPassword() method (salt factor 12); reused in service and test seeding\n61. Test Tokens Per Role — Each test class maintains tokens for TECHNICIAN, MANAGER, ADMIN; enables comprehensive role-based testing\n62. Seeded Test Employees with Known Passwords — TestPopulator creates employees with password123; enables login during test setup\nThoughts on Security Implementation # I expected adding auth to be one of those \u0026ldquo;touch everything, break everything\u0026rdquo; weeks, but it honestly went smoother than I thought — mostly because the structure was already doing its job. Having clear layers (routes → controllers → services → DAOs) meant I could bolt security on at the edges instead of sprinkling checks throughout business logic.\nThe biggest \u0026ldquo;cost\u0026rdquo; wasn\u0026rsquo;t getting it to work, it was making it feel clean: keeping authentication separate from authorization, keeping the lecturer-provided token types inside the security layer, and getting the role rules readable instead of turning every route into a list of three roles.\nThe beforeMatched/afterMatched split was the main mental hurdle. Once it clicked that route roles only exist after matching, the design stopped feeling like ceremony and started feeling like the request lifecycle doing me a favor.\nAnd yes — updating the tests was a lot of mechanical work, but it also felt like a good sign: if adding tokens to tests is mostly systematic, the API surface is probably consistent.\nNext Week # Building out the deployment pipeline. The core functionality is solid; time to harden it and prepare for production.\n","date":"23 March 2026","externalUrl":null,"permalink":"/Portfolio/devlog/maintenancelog-sixthweek/","section":"Devlogs","summary":"Devlog Week 6: JWT Authentication \u0026 Role-Based Authorization # This week was entirely dedicated to building a deployment-ready authentication and authorization system. No new domain features—just securing everything that already exists. The focus was on JWT tokens, role hierarchies, and integrating security seamlessly into the existing architecture without breaking tests or existing functionality.\n","title":"Maintenance Log - Sixth Week: JWT Authentication \u0026 Role-Based Authorization","type":"devlog"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/architecture/","section":"Tags","summary":"","title":"Architecture","type":"tags"},{"content":" Devlog Week 5: Interface Refinement, REST API Testing \u0026amp; The Mapper Pattern # This week marked a significant shift from building features to refining architecture and establishing a robust testing foundation. The focus was on three major areas: granular interface design following ISP principles, comprehensive REST API testing with RestAssured, and implementing the DTO mapper pattern.\nWhat Changed This Week # Atomic interface design: Split IDAO\u0026lt;T\u0026gt; into ICreateDAO, IReadDAO, IUpdateDAO — services now depend only on what they use REST API integration tests: UserRoutes, AssetRoutes, MaintenanceLogRoutes tested with RestAssured + Testcontainers DTO Mapper pattern: Pure record DTOs with dedicated mapper classes for entity ↔ DTO conversion Constructor chaining: DependencyContainer supports both production and test configurations via this(emf) delegation JOIN FETCH strategy: Lazy loading issues resolved with query-specific eager fetching Interface Segregation: From Monolithic to Atomic # Last session ended with a discussion about the Interface Segregation Principle (ISP), and this week I took it to its logical conclusion. The original IDAO\u0026lt;T\u0026gt; interface was doing too much — forcing services to depend on methods they\u0026rsquo;d never call.\nThe Problem # My MaintenanceLogService needed to fetch Asset and User entities, but only for reading. Yet it depended on the full IDAO\u0026lt;T\u0026gt; interface with create(), update(), and other unused operations. ISP says: \u0026ldquo;Clients should not be forced to depend on methods they do not use.\u0026rdquo;\nThe Solution: Atomic Interfaces # Options considered:\nKeep IDAO\u0026lt;T\u0026gt; and accept overexposure (easy, but violates ISP) Split into atomic interfaces and compose as needed (more upfront work, enforces intent) I chose option 2. Break down into single-responsibility building blocks:\npublic interface ICreateDAO\u0026lt;T\u0026gt; { T create(T t); } public interface IReadDAO\u0026lt;T\u0026gt; { T get(Integer id); List\u0026lt;T\u0026gt; getAll(); } public interface IUpdateDAO\u0026lt;T\u0026gt; { T update(T t); } These compose where full create/read/update is needed:\npublic interface ICrudDAO\u0026lt;T\u0026gt; extends ICreateDAO\u0026lt;T\u0026gt;, IReadDAO\u0026lt;T\u0026gt;, IUpdateDAO\u0026lt;T\u0026gt; {} Note on deletion: There\u0026rsquo;s no IDeleteDAO. This system uses soft deletes (activation/deactivation) as domain commands, not CRUD operations. Users and Assets have active boolean fields; MaintenanceLogs are immutable (never deleted). Hard deletes don\u0026rsquo;t exist in the API — deactivation is a state change handled via update() or dedicated methods like setActive().\nEntity-specific interfaces combine base operations with domain queries:\npublic interface IUserDAO extends ICrudDAO\u0026lt;User\u0026gt;, IUserQueries {} public interface IAssetDAO extends ICreateDAO\u0026lt;Asset\u0026gt;, IReadDAO\u0026lt;Asset\u0026gt;, IAssetQueries {} public interface IMaintenanceLogDAO extends ICreateDAO\u0026lt;MaintenanceLog\u0026gt;, IReadDAO\u0026lt;MaintenanceLog\u0026gt;, IMaintenanceLogQueries {} Notice Asset doesn\u0026rsquo;t extend IUpdateDAO — assets are immutable after creation (only activation status changes).\nServices Declare Minimal Dependencies # public class MaintenanceLogServiceImpl { private final IMaintenanceLogDAO logDao; // Full operations private final IReadDAO\u0026lt;Asset\u0026gt; assetDao; // Read-only! private final IReadDAO\u0026lt;User\u0026gt; userDao; // Read-only! } Why not use IAssetDAO here? The service only needs get() and getAll(). Depending on IReadDAO\u0026lt;Asset\u0026gt; instead of IAssetDAO:\nMakes intent explicit (this service doesn\u0026rsquo;t create/update assets) Reduces coupling (service doesn\u0026rsquo;t depend on asset-specific queries it doesn\u0026rsquo;t use) Enables compiler enforcement (can\u0026rsquo;t accidentally call assetDao.create()) The compiler now prevents calling assetDao.create() or userDao.update() in the log service — those methods don\u0026rsquo;t exist on IReadDAO\u0026lt;T\u0026gt;.\nREST API Testing with RestAssured # With the REST layer complete from last week, this week was all about verification. I set up comprehensive integration tests using RestAssured, Testcontainers, and JUnit 5.\nTest Infrastructure # Each route group gets its own test class with isolated test ports to avoid Javalin binding conflicts when tests run in parallel:\n@BeforeAll public static void init() { emf = HibernateTestConfig.getEntityManagerFactory(); // Testcontainers PostgreSQL container = new DependencyContainer(emf); app = AppConfig.start(container, 7071); // UserRoutesTest: 7071, AssetRoutesTest: 7072, etc. RestAssured.baseURI = \u0026#34;http://localhost:7071\u0026#34;; RestAssured.basePath = \u0026#34;/\u0026#34; + Routes.getApiVersion(); } @BeforeEach void setUp() { seeded = TestPopulator.populateUsers(emf); // TRUNCATE + fresh data each test } Testcontainers lifecycle: One PostgreSQL container per test class (@BeforeAll static setup). Container starts once, database is wiped before each test via TRUNCATE TABLE ... RESTART IDENTITY CASCADE. This prevents test pollution (leftover data from previous tests causing failures) while keeping startup overhead minimal.\nWhy separate ports? If multiple test classes run in parallel (Maven Surefire default), they\u0026rsquo;d conflict trying to bind to the same port. Each class gets its own port (7071, 7072, 7073) to enable parallel execution.\nThe TestPopulator ensures predictable state — auto-incrementing IDs reset, foreign key constraints maintained, no orphaned data.\nConstructor Chaining for Testability # To support both production and test configurations, I refactored DependencyContainer:\npublic DependencyContainer() { this(HibernateConfig.getEntityManagerFactory()); // Production } public DependencyContainer(EntityManagerFactory emf) { // Actual wiring logic (used by both constructors) } Production calls new DependencyContainer(). Tests inject a Testcontainers-backed EMF via new DependencyContainer(testEmf). Single source of truth for wiring.\nTest Coverage # Each endpoint tested for:\nHappy paths (200/201/204 with correct data) Query parameters (filtering, limits) Edge cases (404, 409, 400) Idempotency Happy path example:\n@Test void testGetAllActiveUsers() { given() .when() .get(\u0026#34;/users?active=true\u0026#34;) .then() .statusCode(200) .body(\u0026#34;email\u0026#34;, containsInAnyOrder( seeded.values().stream() .filter(User::isActive) .map(User::getEmail) .toArray() )); } Edge case example:\n@Test void postExistingEmailReturns409() { User user1 = seeded.get(\u0026#34;user1\u0026#34;); given() .contentType(\u0026#34;application/json\u0026#34;) .body(String.format(\u0026#34;\u0026#34;\u0026#34; { \u0026#34;firstName\u0026#34;: \u0026#34;Test\u0026#34;, \u0026#34;lastName\u0026#34;: \u0026#34;User\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;%s\u0026#34;, \u0026#34;phone\u0026#34;: \u0026#34;12345678\u0026#34;, \u0026#34;role\u0026#34;: \u0026#34;TECHNICIAN\u0026#34;, \u0026#34;password\u0026#34;: \u0026#34;password123\u0026#34; } \u0026#34;\u0026#34;\u0026#34;, user1.getEmail())) .when() .post(\u0026#34;/users\u0026#34;) .then() .statusCode(409); } The Lazy Loading Gotcha (Again) # Hit a classic Hibernate issue during testing: accessing asset.getLogs() after the EntityManager closed threw LazyInitializationException.\nWeek 4 solution: Calculate lastLogDate in the service layer while the persistence context is active.\nWeek 5 solution: Use JOIN FETCH in the DAO for queries that need relationship data:\n@Override public Asset get(Integer id) { TypedQuery\u0026lt;Asset\u0026gt; query = em.createQuery( \u0026#34;SELECT a FROM Asset a LEFT JOIN FETCH a.logs WHERE a.assetId = :id\u0026#34;, Asset.class ); // ... } Rule of thumb:\nUse JOIN FETCH when the API contract requires relationship data and you want the DAO to guarantee it\u0026rsquo;s loaded Use service-layer calculation when you only need a derived scalar (like lastLogDate) and don\u0026rsquo;t want to widen fetch graphs for all callers This isn\u0026rsquo;t \u0026ldquo;defeating\u0026rdquo; lazy loading — it\u0026rsquo;s consciously choosing the right fetch strategy per query. Other queries that don\u0026rsquo;t need logs still use lazy loading by default.\nThe Mapper Pattern: Pure DTOs \u0026amp; Separation of Concerns # My lecturer suggested extracting DTO construction logic into dedicated mapper classes. The goal: make DTOs pure data carriers while keeping conversion concerns separate from both entities and services.\nAlternatives Considered # Constructors in DTO (Week 1–4 approach): Couples DTO to entity, mixes data structure with conversion logic Mapping in service layer: Repetitive, spreads mapping logic across services Dedicated mapper class: Single responsibility, reusable, testable I chose option 3.\nBefore: Constructor-Based Conversion # public record UserDTO(/* fields */) { public UserDTO(User user) { // Constructor does the mapping this(user.getId(), user.getFirstName(), ...); } } After: Mapper Classes # Pure record:\npublic record UserDTO( Integer id, String firstName, String lastName, String phone, String email, UserRole role, boolean active ) {} Dedicated mapper:\npublic class UserMapper { public static UserDTO toDTO(User user) { return new UserDTO( user.getUserId(), user.getFirstName(), user.getLastName(), user.getPhone(), user.getEmail(), user.getRole(), user.isActive() ); } } When to Map DTO → Entity (or Not) # Most mappers only need toDTO() methods. For User, the create() service method handles entity construction manually because it needs to hash the password and set defaults — business logic that doesn\u0026rsquo;t belong in a mapper.\nThe exception is AssetMapper, which has toEntity() since asset creation is straightforward mapping with no special logic.\nHandling Calculated Fields # AssetDTO includes lastLogDate, which isn\u0026rsquo;t stored — it\u0026rsquo;s calculated from logs. The mapper takes it as an optional parameter:\npublic class AssetMapper { public static AssetDTO toDTO(Asset asset, LocalDateTime lastLogDate) { return new AssetDTO( asset.getAssetId(), asset.getName(), asset.getDescription(), asset.isActive(), lastLogDate ); } public static AssetDTO toDTO(Asset asset) { return toDTO(asset, null); // For list views } } Service calculates, mapper structures:\npublic AssetDTO get(Integer id) { Asset asset = assetDao.get(id); LocalDateTime lastLogDate = null; if (!asset.getLogs().isEmpty()) { lastLogDate = asset.getLogs().get(0).getPerformedDate(); } return AssetMapper.toDTO(asset, lastLogDate); } Mapper Pattern Tradeoffs # Why static methods?\nPros:\nSimple (no DI needed) Easy to call in streams (map(AssetMapper::toDTO)) No lifecycle management Cons:\nHarder to mock in tests Can grow into \u0026ldquo;god mapper\u0026rdquo; if entity relationships get complex No polymorphism (can\u0026rsquo;t swap implementations) For this project, the simplicity wins. If mapping logic gets complex (multiple DTO variants per entity, conditional mapping), I\u0026rsquo;d reconsider instance-based mappers with DI.\nDesign Decisions This Week # 35. Atomic Interface Design — Split IDAO\u0026lt;T\u0026gt; into ICreateDAO, IReadDAO, IUpdateDAO; services depend on minimal interfaces (e.g., IReadDAO\u0026lt;Asset\u0026gt; instead of full DAO)\n36. Interface Composition — ICrudDAO\u0026lt;T\u0026gt; composes atomic interfaces; entity-specific DAOs extend base + queries (e.g., IUserDAO extends ICrudDAO\u0026lt;User\u0026gt;, IUserQueries)\n37. No Hard Deletes — System uses soft deletes (activation/deactivation) as domain commands; no IDeleteDAO or hard delete operations in API\n38. Constructor Chaining for Tests — Production constructor delegates to test constructor: this(emf) pattern enables dependency injection for testing\n39. RestAssured Test Structure — One test class per route group, fresh database state via TRUNCATE in @BeforeEach, separate test ports per class (7071, 7072, 7073) to enable parallel execution\n40. Testcontainers Lifecycle — One PostgreSQL container per test class, started in @BeforeAll; database wiped before each test to prevent state leakage\n41. Query-Specific Fetch Strategy — Use JOIN FETCH when API contract requires relationship data; use service-layer calculation for derived scalars; keep lazy loading as default\n42. Mapper Pattern for DTOs — Pure record DTOs with no constructor logic; mapper classes handle entity ↔ DTO conversion with static methods\n43. Bidirectional Mapping When Needed — Most mappers only need toDTO(); toEntity() only when conversion is straightforward (e.g., AssetMapper has it, UserMapper doesn\u0026rsquo;t)\n44. Calculated Fields as Parameters — Mapper methods accept calculated fields (e.g., lastLogDate) as parameters; service layer handles calculation, mapper handles structure\n45. Method Overloading for Optional Fields — Overloaded mapper methods for with/without optional parameters; enables method references (map(AssetMapper::toDTO))\n46. Static Mappers for Simplicity — Static mapper methods chosen over instance-based for ease of use in streams; trade-off accepted: harder to mock, risk of \u0026ldquo;god mapper\u0026rdquo; growth\nPending Items # JWT authentication (token issuing + validation) Protect selected routes with Authorization: Bearer \u0026lt;token\u0026gt; Thoughts on Testing # Writing tests highlighted how valuable the layered architecture is. Each layer has clear responsibilities: DAOs worry about persistence, services handle business logic, controllers manage HTTP concerns, mappers structure data. When a test fails, it\u0026rsquo;s immediately obvious which layer has the problem. This clarity is the payoff for all the interface design work.\nNext Week # Implementing JWT-based authentication: issuing tokens on login and validating Authorization: Bearer \u0026lt;token\u0026gt; on protected endpoints.\n","date":"16 March 2026","externalUrl":null,"permalink":"/Portfolio/devlog/maintenancelog-fifthweek/","section":"Devlogs","summary":"Devlog Week 5: Interface Refinement, REST API Testing \u0026 The Mapper Pattern # This week marked a significant shift from building features to refining architecture and establishing a robust testing foundation. The focus was on three major areas: granular interface design following ISP principles, comprehensive REST API testing with RestAssured, and implementing the DTO mapper pattern.\n","title":"Maintenance Log - Fifth Week: Interface Refinement, REST API Testing \u0026 The Mapper Pattern","type":"devlog"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/testing/","section":"Tags","summary":"","title":"Testing","type":"tags"},{"content":"","date":"9 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/javalin/","section":"Tags","summary":"","title":"Javalin","type":"tags"},{"content":"","date":"9 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/logging/","section":"Tags","summary":"","title":"Logging","type":"tags"},{"content":" Devlog Week 4: Building a Production-Style REST API # Welcome back! This week was all about taking the solid persistence layer and API integration work from previous weeks and exposing it all through a proper REST API. The goal: build a complete, well-architected HTTP interface using Javalin 7.0.1 that follows REST principles and industry best practices.\nSpoiler: it also turned into a significant refactor.\nTL;DR (what changed this week):\nREST API layer added on top of the persistence/integration work Routes reorganized + versioned under /api/v1 (Javalin 7-style config) DTO strategy tightened (request vs response where shapes differ) Validation split into controller (input) vs service (business rules) Centralized exception handling + proper logging (Logback) Main cleaned up via AppConfig + DependencyContainer Javalin 7.0.1: New Version, New Rules # First hurdle: Javalin 7.0.1 introduced some breaking changes from earlier versions that I had to work through.\nThe big one: In Javalin 7, routes need to be registered inside the config.routes block during app creation. Adding routes after calling .start() no longer works.\nJavalin app = Javalin.create(config -\u0026gt; { config.routes.apiBuilder(routes.getRoutes()); // ← Must be here }).start(7070); // app.get(\u0026#34;/test\u0026#34;, handler); ← This no longer works! This actually forced better architecture — all routing configuration happens upfront in one place, which makes the setup more explicit and easier to reason about.\nAPI Versioning from Day One\nI decided to version the API right from the start with an /api/v1/ prefix on all endpoints:\n/api/v1/users /api/v1/assets /api/v1/logs Rationale: Even if this is a school project, building the habit of versioning APIs is important. If I ever need to make breaking changes later (different response structure, new validation rules, etc.), I can introduce /api/v2/ without breaking existing clients.\nRoute Organization: Keeping Things Modular # With three entities (User, Asset, MaintenanceLog), I needed a clean way to organize routes without having one giant file with 15+ endpoint definitions.\nThe pattern I landed on:\ncontrollers/routes/ ├── Routes.java # Aggregator ├── UserRoutes.java # /api/v1/users ├── AssetRoutes.java # /api/v1/assets + /{id}/logs └── MaintenanceLogRoutes.java # /api/v1/logs Each route class returns an EndpointGroup:\npublic class UserRoutes { public EndpointGroup getRoutes() { return () -\u0026gt; { path(\u0026#34;api/v1/users\u0026#34;, () -\u0026gt; { get(userController::getAll); get(\u0026#34;/{id}\u0026#34;, userController::get); post(userController::create); put(\u0026#34;/{id}\u0026#34;, userController::update); delete(\u0026#34;/{id}\u0026#34;, userController::deactivate); patch(\u0026#34;/{id}\u0026#34;, userController::activate); }); }; } } Then Routes.java aggregates them all:\npublic EndpointGroup getRoutes() { return () -\u0026gt; { userRoutes.getRoutes().addEndpoints(); assetRoutes.getRoutes().addEndpoints(); logRoutes.getRoutes().addEndpoints(); }; } Why this matters: Each entity\u0026rsquo;s routes are self-contained. If I need to change how assets work, I only touch AssetRoutes.java. No risk of accidentally breaking user endpoints.\nDTO Strategy: Security and Separation of Concerns # One of the more interesting design decisions this week: when do you use the same DTO for requests and responses, and when do you split them?\nRule of thumb: I split DTOs whenever the shape of the request and response should differ — either because of sensitive fields (only allowed inbound) or server-owned fields (IDs, derived/calculated fields, etc.).\nUser API: Split DTOs\nFor users, I needed separate request and response DTOs because of passwords:\n// Request DTO (used for POST /users) public class CreateUserRequest { private String firstName; private String lastName; private String email; private String phone; private UserRole role; private String password; // ← Only in requests! } // Response DTO (used for all GET endpoints) public class UserDTO { private Integer id; private String firstName; private String lastName; private String phone; private String email; private UserRole role; private boolean active; // NO password field — never exposed } The password flow:\nClient sends CreateUserRequest with plain text password Service layer hashes it immediately with BCrypt UserDTO returned in response (no password) Password never appears in any response Assets/Logs: Shared read DTOs + create-specific request DTOs\nFor assets and logs, there\u0026rsquo;s no sensitive data, so I can generally reuse the same DTO for read endpoints (and for any operations where the request/response fields match). That keeps things simple and avoids needless duplication.\nFor create endpoints, I still prefer a dedicated request DTO (like CreateLogRequest) because the client shouldn\u0026rsquo;t be sending fields like IDs, and sometimes the API returns extra data that isn\u0026rsquo;t part of the input.\nNested Resources: When REST Gets Interesting # A key architectural decision this week was how to structure log endpoints.\nThe use case: In the UI, you select an asset and then view its logs. The primary access pattern is asset-centric, not log-centric.\nTwo options:\nOption A: Only nested routes\nGET /api/v1/assets/{id}/logs Option B: Both nested AND standalone routes\n# Asset-scoped GET /api/v1/assets/{id}/logs # Cross-asset queries GET /api/v1/logs?userId=42 GET /api/v1/logs?status=COMPLETED I went with Option B — both structures.\nRationale:\nMost of the time, you\u0026rsquo;re looking at logs for a specific asset (nested routes) But sometimes you need cross-cutting queries: \u0026ldquo;show me all logs by this technician\u0026rdquo; or \u0026ldquo;show me all incomplete maintenance tasks\u0026rdquo; Having both gives maximum flexibility without forcing awkward query parameters Implementation:\n// AssetRoutes.java path(\u0026#34;api/v1/assets\u0026#34;, () -\u0026gt; { // ... asset endpoints ... path(\u0026#34;/{id}/logs\u0026#34;, () -\u0026gt; { get(logController::getLogsByAsset); post(logController::createLogForAsset); }); }); // MaintenanceLogRoutes.java path(\u0026#34;api/v1/logs\u0026#34;, () -\u0026gt; { get(logController::getAll); get(\u0026#34;/{id}\u0026#34;, logController::get); get(\u0026#34;/user/{userId}\u0026#34;, logController::getByUser); }); Design constraint: Logs can ONLY be created via /assets/{id}/logs. This enforces that every log is attached to an asset from the start — you can\u0026rsquo;t accidentally create orphaned logs.\nValidation: Two Layers, Two Responsibilities # I split validation into two distinct layers with different responsibilities:\nController Layer: Input Validation\nThis is where I check that the request is structurally valid:\nCreateUserRequest request = ctx.bodyValidator(CreateUserRequest.class) .check(dto -\u0026gt; dto.getFirstName() != null, \u0026#34;First name is required\u0026#34;) .check(dto -\u0026gt; dto.getEmail() != null, \u0026#34;Email is required\u0026#34;) .check(dto -\u0026gt; dto.getEmail().matches(\u0026#34;^[^@\\\\s]+@[^@\\\\s]+\\\\.[^@\\\\s]+$\u0026#34;), \u0026#34;Invalid email format\u0026#34;) .get(); Service Layer: Business Validation\nThis is where I check business rules:\npublic UserDTO create(CreateUserRequest request) { // Business rule: email must be unique if (userDaoExpanded.getByEmail(request.getEmail()) != null) { throw new ApiException(409, \u0026#34;Email already exists\u0026#34;); } // ... } Why separate them?\nInput validation catches malformed requests before they hit business logic Business validation enforces domain rules (uniqueness, referential integrity, etc.) Service layer can be tested independently of HTTP concerns Clear separation of concerns The Lazy Loading Problem (And How I Solved It) # Hit an interesting issue when implementing the Asset DTO.\nThe requirement: When you GET an asset, include the date of its most recent maintenance log.\nNote: This assumes the logs collection is ordered newest-first (for example via @OrderBy(\u0026quot;performedDate DESC\u0026quot;), or by fetching the latest log with a query).\nFirst attempt:\npublic class AssetDTO { private LocalDateTime lastLogDate; public AssetDTO(Asset asset) { // ... this.lastLogDate = asset.getLogs().isEmpty() ? null : asset.getLogs().get(0).getPerformedDate(); } } The problem: asset.getLogs() triggers lazy loading. If the persistence context/session is already closed (for example, after the DAO call returns), this throws LazyInitializationException.\nThe solution: Calculate it in the service layer while you\u0026rsquo;re still inside an active persistence context:\npublic AssetDTO get(Integer id) { Asset asset = assetDao.get(id); LocalDateTime lastLogDate = null; if (!asset.getLogs().isEmpty()) { // ← Persistence context still active here lastLogDate = asset.getLogs().get(0).getPerformedDate(); } AssetDTO dto = new AssetDTO(asset); dto.setLastLogDate(lastLogDate); // ← Set after construction return dto; } Lesson learned: DTOs should be \u0026ldquo;dumb\u0026rdquo; data containers. Any logic that requires database access belongs in the service layer, not the DTO constructor.\nQuery Parameters: UI-Friendly Filtering # For the asset list endpoint, I wanted the UI to have a dropdown: \u0026ldquo;Show all / Show active / Show inactive\u0026rdquo;.\nImplementation:\n// Controller public void getAll(Context ctx) { String activeParam = ctx.queryParam(\u0026#34;active\u0026#34;); Boolean active = activeParam != null ? Boolean.parseBoolean(activeParam) : null; ctx.json(assetService.getAll(active)); } // Service public List\u0026lt;AssetDTO\u0026gt; getAll(Boolean active) { List\u0026lt;Asset\u0026gt; assets; if (active == null) { assets = assetDao.getAll(); // All assets } else { assets = assetDaoExpanded.getAllByStatus(active); // Filtered } return assets.stream().map(AssetDTO::new).toList(); } This gives three API calls:\nGET /api/v1/assets → All assets GET /api/v1/assets?active=true → Active only GET /api/v1/assets?active=false → Inactive only Exception Handling: Centralized and Logged # Rather than handling exceptions in every controller method, I configured them once in the Javalin setup:\nconfig.routes.exception(DatabaseException.class, (e, ctx) -\u0026gt; { int statusCode = switch (e.getErrorType()) { case NOT_FOUND -\u0026gt; 404; case CONSTRAINT_VIOLATION -\u0026gt; 409; case CONNECTION_FAILURE -\u0026gt; 503; case TRANSACTION_FAILURE, QUERY_FAILURE, UNKNOWN -\u0026gt; 500; }; if (statusCode \u0026gt;= 500) { log.error(\u0026#34;Database error [{}]: {}\u0026#34;, e.getErrorType(), e.getMessage(), e); } else { log.warn(\u0026#34;Database error [{}]: {}\u0026#34;, e.getErrorType(), e.getMessage()); } ctx.status(statusCode).json(Map.of(\u0026#34;status\u0026#34;, statusCode, \u0026#34;msg\u0026#34;, e.getMessage())); }); Benefits:\nDRY — exception mapping in one place Consistent error responses across all endpoints Automatic logging with appropriate levels (ERROR for 500s, WARN for 4xxs) SLF4J logs full cause chain when exception is passed as last parameter Enum validation:\nOne thing I had to add: try-catch blocks for enum parsing:\ntry { LogStatus status = LogStatus.valueOf(statusParam.toUpperCase()); ctx.json(logService.getByStatus(status)); } catch (IllegalArgumentException e) { throw new ApiException(400, \u0026#34;Invalid status value\u0026#34;); } Without this, sending ?status=INVALID would crash with an unhandled exception. Now it returns a clean 400 error.\nLogging: SLF4J + Logback # Set up proper logging this week with Logback:\nConfiguration highlights:\nConsole appender for development (immediate feedback) Rolling file appender for production (persistent logs) Daily rolling: javalin-app.log → javalin-app-YYYY-MM-DD.log 30 days retention (automatic cleanup) App package at DEBUG, frameworks at WARN (reduce noise) The Great Refactoring: Cleaning Up Main # At this point, Main.java had grown to ~80 lines of dependency wiring and configuration. It looked like this:\npublic static void main(String[] args) { IDAO\u0026lt;User\u0026gt; userDao = new UserDAO(emf); IUserDAO userDaoExpanded = new UserDAO(emf); UserService userService = new UserServiceImpl(userDao, userDaoExpanded); UserController userController = new UserController(userService); // ... repeat for Asset and Log ... Javalin app = Javalin.create(config -\u0026gt; { // 50+ lines of exception handlers and configuration }).start(7070); } The problem: Main was doing too much. It was responsible for:\nCreating all DAOs, services, and controllers Configuring Javalin (plugins, routes, exception handlers) Starting the server The refactoring:\nCreated two new classes to separate concerns:\nDependencyContainer.java — handles all object creation:\npublic class DependencyContainer { private static final EntityManagerFactory emf = HibernateConfig.getEntityManagerFactory(); private final UserDAO userDao; private final AssetDAO assetDao; private final MaintenanceLogDAO logDao; private final UserService userService; private final AssetService assetService; private final MaintenanceLogService logService; private final UserController userController; private final AssetController assetController; private final MaintenanceLogController logController; public DependencyContainer() { // Create all dependencies in correct order this.userDao = new UserDAO(emf); // ... } public Routes getRoutes() { return new Routes(userController, assetController, logController); } } AppConfig.java — handles Javalin configuration:\npublic class AppConfig { public static void start(int port) { DependencyContainer container = new DependencyContainer(); Routes routes = container.getRoutes(); Javalin app = Javalin.create(config -\u0026gt; { configurePlugins(config); configureRoutes(config, routes); configureExceptionHandlers(config); }); app.start(port); } private static void configurePlugins(JavalinConfig config) { /* ... */ } private static void configureRoutes(JavalinConfig config, Routes routes) { /* ... */ } private static void configureExceptionHandlers(JavalinConfig config) { /* ... */ } } Main.java after refactoring:\npublic static void main(String[] args) { AppConfig.start(7070); log.info(\u0026#34;Server started on port 7070\u0026#34;); } Two lines. That\u0026rsquo;s it.\nBenefits:\nSingle Responsibility Principle: each class has one job Testable: can inject test dependencies into DependencyContainer Maintainable: clear separation makes changes easier Clean: Main is now just the entry point, nothing more Project Structure: The Final Cleanup # After all the refactoring, I reorganized the entire package structure for clarity:\napp/ ├── Main.java ├── config/ │ ├── AppConfig.java │ ├── DependencyContainer.java │ └── hibernate/ # Moved from persistence.config │ ├── HibernateConfig.java │ ├── EntityRegistry.java │ └── ... ├── controllers/ │ ├── UserController.java │ ├── AssetController.java │ ├── MaintenanceLogController.java │ └── routes/ │ ├── Routes.java # Aggregator │ ├── UserRoutes.java │ ├── AssetRoutes.java │ └── MaintenanceLogRoutes.java ├── dtos/ │ ├── UserDTO.java │ ├── CreateUserRequest.java │ ├── AssetDTO.java │ ├── MaintenanceLogDTO.java │ └── CreateLogRequest.java ├── entities/ │ ├── User.java │ ├── Asset.java │ ├── MaintenanceLog.java │ └── enums/ │ ├── UserRole.java │ ├── LogStatus.java │ └── TaskType.java ├── exceptions/ │ ├── ApiException.java │ ├── DatabaseException.java │ └── enums/ │ └── DatabaseErrorType.java ├── integration/ │ ├── RandomUserClient.java │ ├── RandomUserDTO.java │ └── seeding/ # Organized seeding logic │ ├── ApiUserService.java │ ├── ApiUserServiceImpl.java │ └── UserSeeder.java ├── persistence/ │ ├── daos/ │ │ ├── UserDAO.java │ │ ├── AssetDAO.java │ │ └── MaintenanceLogDAO.java │ └── interfaces/ │ ├── IDAO.java │ ├── IUserDAO.java │ ├── IAssetDAO.java │ └── IMaintenanceLogDAO.java ├── services/ │ ├── UserService.java │ ├── UserServiceImpl.java │ ├── AssetService.java │ ├── AssetServiceImpl.java │ ├── MaintenanceLogService.java │ └── MaintenanceLogServiceImpl.java └── utils/ ├── APIReader.java ├── CredentialsHandler.java └── PropertyReader.java # Renamed from Utils.java Key changes:\nHibernate config moved to app.config.hibernate (it\u0026rsquo;s application config, not persistence logic) Seeding logic organized under integration.seeding Utils.java renamed to PropertyReader.java (specific name, not a junk drawer) Consistent naming throughout (no more generic \u0026ldquo;Utils\u0026rdquo; or \u0026ldquo;Helper\u0026rdquo; classes) API Documentation # Base URL # http://localhost:7070/api/v1 Authentication # Not yet implemented. All endpoints are currently open.\nQuick Reference # Users: /users Assets: /assets Asset logs (create + list): /assets/{id}/logs Cross-asset logs (queries): /logs User Endpoints # HTTP Method Endpoint Notes Success Common Errors POST /users Create user 201 400, 409 GET /users List users 200 GET /users/{id} Get user by ID 200 404 PUT /users/{id} Update non-password fields 200 400, 404, 409 DELETE /users/{id} Deactivate user (soft delete) 204 404 PATCH /users/{id} Reactivate user 204 404 Example: Create user (request)\n{ \u0026#34;firstName\u0026#34;: \u0026#34;John\u0026#34;, \u0026#34;lastName\u0026#34;: \u0026#34;Doe\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;john.doe@example.com\u0026#34;, \u0026#34;phone\u0026#34;: \u0026#34;12345678\u0026#34;, \u0026#34;role\u0026#34;: \u0026#34;TECHNICIAN\u0026#34;, \u0026#34;password\u0026#34;: \u0026#34;securePassword123\u0026#34; } Example: User (response)\n{ \u0026#34;id\u0026#34;: 1, \u0026#34;firstName\u0026#34;: \u0026#34;John\u0026#34;, \u0026#34;lastName\u0026#34;: \u0026#34;Doe\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;john.doe@example.com\u0026#34;, \u0026#34;phone\u0026#34;: \u0026#34;12345678\u0026#34;, \u0026#34;role\u0026#34;: \u0026#34;TECHNICIAN\u0026#34;, \u0026#34;active\u0026#34;: true } Validation Rules:\nRequired on create: firstName, lastName, email, phone, role, password email must match ^[^@\\\\s]+@[^@\\\\s]+\\\\.[^@\\\\s]+$ role must be TECHNICIAN or MANAGER Password is hashed with BCrypt and never returned Asset Endpoints # HTTP Method Endpoint Notes Success Common Errors POST /assets Create asset (active optional) 201 400 GET /assets Optional filter: ?active=true/false 200 GET /assets/{id} Get asset by ID 200 404 PATCH /assets/{id} Activate asset 204 404 DELETE /assets/{id} Deactivate asset 204 404 Example: Create asset (request)\n{ \u0026#34;name\u0026#34;: \u0026#34;Hydraulic Press #3\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Main production line hydraulic press\u0026#34;, \u0026#34;active\u0026#34;: true } Example: Asset (response)\n{ \u0026#34;id\u0026#34;: 1, \u0026#34;name\u0026#34;: \u0026#34;Hydraulic Press #3\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Main production line hydraulic press\u0026#34;, \u0026#34;active\u0026#34;: true, \u0026#34;lastLogDate\u0026#34;: null } Query Parameters:\nactive (optional): true = active assets only, false = inactive only, omitted = all assets Maintenance Log Endpoints (Asset-Scoped) # HTTP Method Endpoint Notes Success Common Errors POST /assets/{id}/logs Create log for a specific asset 201 400, 404 GET /assets/{id}/logs Optional filters: ?taskType=..., ?status=... 200 400, 404 Example: Create log (request)\n{ \u0026#34;performedDate\u0026#34;: \u0026#34;2026-03-06T14:30:00\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;COMPLETED\u0026#34;, \u0026#34;taskType\u0026#34;: \u0026#34;MAINTENANCE\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Replaced hydraulic fluid\u0026#34;, \u0026#34;performedByUserId\u0026#34;: 1 } Example: Log (response)\n{ \u0026#34;id\u0026#34;: 1, \u0026#34;performedDate\u0026#34;: \u0026#34;2026-03-06T14:30:00\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;COMPLETED\u0026#34;, \u0026#34;taskType\u0026#34;: \u0026#34;MAINTENANCE\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;Replaced hydraulic fluid\u0026#34;, \u0026#34;assetId\u0026#34;: 1, \u0026#34;assetName\u0026#34;: \u0026#34;Hydraulic Press #3\u0026#34;, \u0026#34;performedByUserId\u0026#34;: 1, \u0026#34;performedByName\u0026#34;: \u0026#34;John Doe\u0026#34; } Query Parameters:\ntaskType (optional): PRODUCTION, MAINTENANCE, or ERROR status (optional): PENDING, IN_PROGRESS, COMPLETED, or CANCELLED Maintenance Log Endpoints (Standalone) # HTTP Method Endpoint Notes Success Common Errors GET /logs Optional filter: ?status=... 200 400 GET /logs/{id} Get log by ID 200 404 GET /logs/user/{userId} Logs performed by a user 200 404 GET /logs/active-assets Optional: ?limit=10 200 Query Parameters:\nstatus (optional): Filter by PENDING, IN_PROGRESS, COMPLETED, or CANCELLED limit (optional): Maximum number of results (default: 10) Notes:\nLogs are immutable (no update/delete operations) Logs can only be created via /assets/{id}/logs Data Models # Enums # UserRole:\nTECHNICIAN MANAGER LogStatus:\nPENDING IN_PROGRESS COMPLETED CANCELLED TaskType:\nPRODUCTION MAINTENANCE ERROR Error Response Format # All errors follow this format:\n{ \u0026#34;status\u0026#34;: 404, \u0026#34;msg\u0026#34;: \u0026#34;User not found\u0026#34; } HTTP Status Codes:\n400 - Bad Request (validation failure, invalid input) 404 - Not Found (resource doesn\u0026rsquo;t exist) 409 - Conflict (constraint violation, e.g., duplicate email) 500 - Internal Server Error (database transaction/query failure) 503 - Service Unavailable (database connection failure) API Notes # DELETE deactivates (active=false), PATCH reactivates (active=true) and both return 204 No Content Passwords are only accepted inbound and never returned (hashed with BCrypt) Logs are immutable and always belong to an asset (created via /assets/{id}/logs) Assets have no general update endpoint (only active can change) Filtering is done with query parameters, not separate routes Lessons Learned # Technical skills:\nJavalin 7.0.1 configuration patterns and breaking changes RESTful API design (nested resources, query parameters, proper HTTP verbs) DTO strategy for security and separation of concerns Multi-layer validation (input vs. business rules) Centralized exception handling with logging Lazy loading pitfalls and solutions Design patterns:\nController → Service → DAO layering Dependency injection via constructor Configuration separation (AppConfig, DependencyContainer) Route organization with EndpointGroups Hybrid DTOs (minimal embedded data for relationships) Architecture principles:\nSingle Responsibility Principle (each class has one job) Dependency Inversion (depend on interfaces, not implementations) Separation of concerns (each layer has distinct responsibilities) DRY (exception handling in one place, not scattered) Debugging insights:\nThe lazy loading issue was a good reminder that layer boundaries matter. The DTO constructor runs after the DAO returns, which often means the persistence context/session has already been closed. Any lazy collection access at that point fails.\nThe solution: keep entity relationship navigation in the service layer while you\u0026rsquo;re still in an active persistence context, and keep DTOs as simple data containers.\nWhat\u0026rsquo;s Next # The REST API layer is complete for a minimal viable product and has the parts I want for now. The entire application now has:\n✅ Solid persistence layer with exception handling ✅ External API integration with concurrent processing ✅ Complete REST API with proper layering ✅ Input and (limited) business validation ✅ Centralized exception handling and logging ✅ Clean, maintainable architecture Next steps:\nAPI endpoint testing (Rest Assured + Hamcrest) Authentication and authorization (JWT tokens?) Swagger/OpenAPI documentation generation Deployment considerations Maintenance Log - Weekly Summary (Updated) # Updates Since Last Summary # Current state: REST API layer is complete on top of the existing persistence + integration work. Routing, DTOs, validation, exception handling, and logging are now in place with a cleaner app entrypoint/config.\nNew / changed this week:\nJavalin 7.0.1 routing moved fully into config.routes using EndpointGroup API versioning baked in from day one (/api/v1/...) Routes split per entity and aggregated (User / Asset / MaintenanceLog) DTOs split for Users (request vs response) to keep passwords out of responses Logs exposed both asset-scoped and standalone (nested resources + cross-asset queries) Validation split into controller input checks + service business rules Centralized exception mapping + Logback logging (consistent API errors + better diagnostics) Main refactored into Main (entry), AppConfig (Javalin), and DependencyContainer (wiring) Still true (carried forward):\nMaintenance logs are immutable (audit trail) Users/assets are soft-deleted via active Default to LAZY loading; service layer handles any relationship navigation DAOs are persistence-focused; services own business rules That\u0026rsquo;s it for this week — next up is testing the API endpoints with Rest Assured + Hamcrest.\n","date":"9 March 2026","externalUrl":null,"permalink":"/Portfolio/devlog/maintenancelog-fourthweek/","section":"Devlogs","summary":"Devlog Week 4: Building a Production-Style REST API # Welcome back! This week was all about taking the solid persistence layer and API integration work from previous weeks and exposing it all through a proper REST API. The goal: build a complete, well-architected HTTP interface using Javalin 7.0.1 that follows REST principles and industry best practices.\n","title":"Maintenance Log - Fourth Week: Building a Production-Style REST API","type":"devlog"},{"content":"","date":"9 March 2026","externalUrl":null,"permalink":"/Portfolio/tags/rest/","section":"Tags","summary":"","title":"REST","type":"tags"},{"content":"","date":"26 February 2026","externalUrl":null,"permalink":"/Portfolio/tags/api/","section":"Tags","summary":"","title":"API","type":"tags"},{"content":"","date":"26 February 2026","externalUrl":null,"permalink":"/Portfolio/tags/concurrency/","section":"Tags","summary":"","title":"Concurrency","type":"tags"},{"content":"","date":"26 February 2026","externalUrl":null,"permalink":"/Portfolio/tags/externalapi/","section":"Tags","summary":"","title":"ExternalAPI","type":"tags"},{"content":" Devlog Week 3: External API Integration \u0026amp; Concurrent Processing # Alright, welcome back! After a little bit of a break due to unforeseen circumstances, I\u0026rsquo;ve now continued my work on this little project of mine. The goal this week was to integrate and fetch some data from an external API and use it somehow — so, without further ado:\nChoosing the Right API for Testing # Choosing an API was actually the first hurdle I had to overcome. The core issue was that from a design standpoint (and for the main use case), I didn\u0026rsquo;t actually need any external information — I just wanted realistic seed data.\nSo I looked around a bit and found this nifty little API that can generate random user information. The goal was to have data I could enter and use in my database instead of generic \u0026ldquo;User1\u0026rdquo;, \u0026ldquo;User2\u0026rdquo;, etc., but also to not have to hand-write all these guys myself. The added bonus is that I can make the output deterministic by using a fixed seed, which is perfect for testing.\nAnd yes: it also gives me users I can log in as later by digging out the passwords before they get hashed (obviously only for testing — not real production).\nKey decision:\nAPI: RandomUser.me Purpose: Generate realistic test users for development Benefit: Known passwords (via fixed seed) for testing authentication later The Fixed Seed Trade-off # After settling on RandomUser.me, I started playing around with the endpoint configuration. The API is actually pretty flexible — you can specify which fields you want, which nationalities, and even use a seed for deterministic results.\nI ended up with this:\n\u0026#34;https://randomuser.me/api/?results=%d\u0026amp;nat=gb,dk\u0026amp;inc=name,login,email,phone\u0026amp;seed=myfixedseed123\u0026#34; Breaking this down:\nnat=gb,dk — Only British and Danish users (seemed more realistic for a Denmark-based system than getting users from everywhere) inc=name,login,email,phone — Only return the fields I actually need, so I don\u0026rsquo;t have to ignore a ton of JSON seed=myfixedseed123 — The big one: this makes the API return the same users every time The seed parameter was great for predictability, but it created an interesting problem I didn\u0026rsquo;t anticipate. When I tried running my multi-threaded fetch (more on that later), I got duplicate key violations in the database. Took me a minute to realize what was happening: all my threads were calling the API with the same seed, so they were all getting back the same 5 users!\nThe solution: I ended up creating two endpoints — one with the fixed seed for actual seeding (predictable, known data), and one without for demonstrating the multi-threading speedup (random data, no duplicates).\nBuilding a Generic API Reader # One thing I wanted to avoid was writing API-specific HTTP and JSON handling code every time I needed to call an external service. So I built a generic APIReader class that could handle any JSON API, not just RandomUser.\nThe idea was: generic HTTP fetching + JSON parsing in one place, then specific client classes (like RandomUserClient) that know about the API\u0026rsquo;s structure and endpoints.\nThis also meant I had to think about exception handling. My first pass was pretty lazy:\ncatch (IOException | URISyntaxException e) { throw new IllegalArgumentException(\u0026#34;Could not retrieve data from the provided URL. Try again later\u0026#34;); } This loses all context — you don\u0026rsquo;t know what URL failed, why it failed, or whether retrying even makes sense.\nAfter some back-and-forth, I landed on this approach:\ncatch (URISyntaxException e) { String safeUrl = url.split(\u0026#34;\\\\?\u0026#34;)[0]; // Redact query params (API keys, etc.) throw new IllegalArgumentException(\u0026#34;Invalid URL: \u0026#34; + safeUrl + \u0026#34; error: \u0026#34; + e.getMessage(), e); } catch (IOException e) { String safeUrl = url.split(\u0026#34;\\\\?\u0026#34;)[0]; throw new RuntimeException(\u0026#34;API call failed for \u0026#34; + safeUrl + \u0026#34;: \u0026#34; + e.getMessage(), e); } Why separate the exceptions?\nURISyntaxException = programming error (malformed URL string) IOException = runtime error (network failure, bad JSON, etc.) Different problems, different handling. And by chaining the original exception (e), I preserve the full stack trace for debugging.\nDTO Pattern \u0026amp; Nested JSON Mapping # The RandomUser API returns nested JSON, which meant I needed to map it properly. The structure looks like this:\n{ \u0026#34;name\u0026#34;: { \u0026#34;first\u0026#34;: \u0026#34;John\u0026#34;, \u0026#34;last\u0026#34;: \u0026#34;Doe\u0026#34; }, \u0026#34;login\u0026#34;: { \u0026#34;password\u0026#34;: \u0026#34;secret\u0026#34; }, \u0026#34;email\u0026#34;: \u0026#34;john@example.com\u0026#34;, \u0026#34;phone\u0026#34;: \u0026#34;123-456-7890\u0026#34; } My first instinct was to flatten this in the DTO using @JsonProperty:\n@JsonProperty(\u0026#34;name.first\u0026#34;) private String firstName; Turns out Jackson doesn\u0026rsquo;t support dot notation like that. It just looks for a field literally called \u0026quot;name.first\u0026quot; at the root level.\nThe actual solution: Mirror the JSON structure with nested records:\n@JsonProperty(\u0026#34;name\u0026#34;) private Name name; @JsonProperty(\u0026#34;login\u0026#34;) private Login login; public record Name(String first, String last) {} public record Login(String password) {} Then when I need the data in the service layer:\ndto.getName().first() dto.getLogin().password() One gotcha I hit: I initially made the records private, which meant I couldn\u0026rsquo;t access them from outside the DTO class. Had to make them public for the service layer to use them.\nExecutorService: When Concurrency Actually Matters # This was probably the most interesting part of the week — demonstrating actual, measurable speedup from multi-threading.\nThe setup was simple:\nSequential: Make 5 API calls one after another, each fetching 10 users Concurrent: Make 5 API calls in parallel using ExecutorService, each fetching 10 users I timed both approaches:\nSequential (5x10): 1447ms Concurrent (5 threads, 50 total): 216ms Speedup: 6.7x The concurrent version was almost 7 times faster. The reason is that network I/O is the bottleneck here. When you make a request, most of the time is spent waiting for the server to respond. With multi-threading, all 5 threads can be waiting simultaneously instead of one at a time.\nException handling in concurrent code:\nOne thing I had to think through was: what happens if one of the API calls fails? Should the whole seeding operation fail, or should I just skip that batch and continue with the others?\nI went with the latter:\nfor (Future\u0026lt;List\u0026lt;RandomUserDTO\u0026gt;\u0026gt; f : futures) { try { List\u0026lt;RandomUserDTO\u0026gt; userDTOList = f.get(); users.addAll(userDTOList); } catch (ExecutionException e) { System.out.println(\u0026#34;Batch failed: \u0026#34; + e.getCause().getMessage()); // Log and continue — don\u0026#39;t stop the whole process } } small note: for the moment this is just a print statement, when i set up some logging that will change, but since this is a once only in non-production operation, this works for the moment.\nThis way, if one batch fails (network hiccup, bad JSON, whatever), I still get the other 4 batches worth of users. For a one-time seeding operation, partial success is better than complete failure.\nService Layer: Where Business Logic Lives # With the API client done, I needed a service layer to handle the actual business logic: converting DTOs to entities, assigning roles, hashing passwords, and persisting everything.\nI kept the DAO layer \u0026ldquo;dumb\u0026rdquo; — it just handles database operations. All the logic lives in the service:\nResponsibilities:\nDTO → Entity conversion Role assignment (every 5th user = MANAGER, rest = TECHNICIAN) Password hashing with BCrypt Orchestrating the whole seeding process One design decision I revisited: when to hash passwords\nInitially, I had the flow like this:\nConvert DTOs to User entities (with plain text passwords) Loop through and hash all passwords Persist But this meant plain text passwords were sitting in memory between steps 1 and 2, which felt wrong. I refactored to hash immediately during conversion:\nUser.builder() .firstName(dto.getName().first()) .lastName(dto.getName().last()) .email(dto.getEmail()) .password(hashPassword(dto.getLogin().password())) // Hash right away .active(true) .build() This way, plain text passwords never even make it into a User entity. They\u0026rsquo;re hashed the moment they\u0026rsquo;re pulled from the DTO.\nDependency Injection \u0026amp; Interface Usage # I hit a small compilation error that turned into a good learning moment about dependency injection.\nI had declared my DAO as an interface in Main:\nIDAO\u0026lt;User\u0026gt; userDao = new UserDAO(emf); But my service was expecting the concrete class:\npublic class ApiUserServiceImpl { private final UserDAO userDao; // Concrete class } This failed to compile when I tried to pass the interface-typed variable to the service.\nThe fix was simple: make the service depend on the abstraction, not the implementation:\npublic class ApiUserServiceImpl { private final IDAO\u0026lt;User\u0026gt; userDao; // Interface } Why this matters:\nI can now inject a mock DAO for testing without changing the service The service doesn\u0026rsquo;t care about UserDAO\u0026rsquo;s specific implementation details This is basically the Dependency Inversion Principle in action Small change, but it makes the code more flexible and testable.\nSecurity: BCrypt Password Hashing # One thing I definitely didn\u0026rsquo;t want was plain text passwords in the database. Even for test data, it\u0026rsquo;s bad practice to get into the habit of storing passwords unhashed.\nI used BCrypt, which handles salting automatically:\nprivate String hashPassword(String password) { return BCrypt.hashpw(password, BCrypt.gensalt()); } Each password gets a unique salt generated by BCrypt.gensalt(), which means even if two users have the same password, the hashes will be different.\nThe benefit of using a fixed seed for the API:\nI know the plain text passwords (they\u0026rsquo;re in the API response) They\u0026rsquo;re hashed in the database When I implement authentication later, I can test it with known credentials So I get realistic password security while still having predictable test data. Win-win.\nLessons Learned # Technical skills:\nExecutorService and concurrent programming (and seeing real, measurable speedup) Jackson JSON mapping, especially with nested structures Exception handling strategies: preserve context, chain causes, handle different error types differently BCrypt password hashing Design patterns:\nDTO pattern for API integration (separating external data structure from internal entities) Dependency injection with interfaces (program to abstractions, not implementations) Separation of concerns across layers (DAO = persistence, Service = business logic) Debugging insights:\nThe duplicate user issue was a good lesson in understanding how tools work. I assumed \u0026ldquo;concurrent API calls\u0026rdquo; + \u0026ldquo;fixed seed\u0026rdquo; would work fine, but I didn\u0026rsquo;t think through that the seed makes the API deterministic — same seed + same parameters = same results. All my threads were fetching identical users.\nOnce I understood that, the fix was obvious: either use different seeds/pages per thread, or just don\u0026rsquo;t use multi-threading with a fixed seed. I went with the latter for actual seeding (single-threaded, predictable) and kept multi-threading for the performance demo (random data).\nWhat\u0026rsquo;s Next # The persistence layer is solid, I\u0026rsquo;ve got test data, and I\u0026rsquo;ve demonstrated some concurrency skills. Next up is probably the REST API layer with Javalin — time to actually expose this thing over HTTP and start thinking about authentication and authorization.\nAppendix: Current folder structure # Just for quick context (and so this post has somewhere to point when I mention packages), here\u0026rsquo;s the trimmed version of the project structure. I\u0026rsquo;m only including the stuff that\u0026rsquo;s relevant to this week\u0026rsquo;s work (API integration + services + persistence).\nsrc/ └─ main/ ├─ java/ │ └─ app/ │ ├─ Main.java │ ├─ entities/ │ │ └─ model/ (Asset, MaintenanceLog, User) │ ├─ exceptions/ (ApiException, DatabaseException, DatabaseErrorType) │ ├─ integration/ │ │ ├─ client/ (RandomUserClient) │ │ ├─ dto/ (RandomUserDTO) │ │ └─ util/ (APIReader) │ ├─ persistence/ (Hibernate config + DAOs) │ └─ services/ (ApiUserService + implementation) └─ resources/ (config.properties, logback.xml) Maintenance Log - Design Decisions (Updated) # Updates Since Last Summary # Current state: Persistence layer is solid, DAOs/services are in place, and I\u0026rsquo;ve now got a repeatable way to seed realistic test users (including a concurrency demo).\nCarried forward (still true) # Before the new API stuff (skip to #16 for those), these are still the rules of the world:\nMaintenance logs are immutable (audit trail) Users are soft-deleted (active boolean) Assets are immutable except for active Relationships are intentional (only bidirectional where navigation is actually used) Default to LAZY loading; pull related data explicitly when needed DAOs are “dumb persistence”; services contain business logic Persistence errors are wrapped in custom exceptions (DatabaseException + error type) TaskType is an enum (not an entity) Logs come back newest-first (@OrderBy(\u0026quot;performedDate DESC\u0026quot;)) Major Design Changes (This Week) # 16. API Integration Architecture # Decision: Generic APIReader + specific client classes Implementation: APIReader handles HTTP/JSON, RandomUserClient handles RandomUser-specific logic Rationale: Reusability — can add other API integrations (weather, stock prices, etc.) without duplicating HTTP/JSON code Exception handling: Separate URISyntaxException (programming error) from IOException (runtime error), preserve original exception with cause chaining 17. Fixed Seed for Test Data # Decision: Use fixed seed for seeding, random for demonstrations Implementation: Two endpoint configurations — fixed (seed=myfixedseed123) and random Rationale: Predictable test data with known passwords for authentication testing Trade-off discovered: Fixed seed + multi-threading = duplicate users (all threads get same results) Solution: Use single-threaded for actual seeding, multi-threaded for performance demo with random data 18. Password Security # Decision: Hash passwords immediately during entity creation (not after) Implementation: BCrypt with auto-generated salt, hashing happens in DTO → Entity conversion Rationale: Never store plain text passwords in memory or database, even temporarily Password field: Added password field to User entity for future authentication features Testing benefit: Fixed seed means known plain text passwords for testing authentication flow 19. Concurrent API Fetching # Decision: Support both single and multi-threaded fetching via configuration Implementation: ExecutorService with configurable thread count, invokeAll() for batch submission Performance: ~6.7x speedup (1447ms sequential vs 216ms concurrent for 50 users) Exception handling: Individual batch failures logged and skipped, partial results returned Rationale: Demonstrates concurrency benefits, graceful degradation on partial failure 20. Service Layer Responsibilities # Decision: Service layer handles all business logic, DAO remains \u0026ldquo;dumb persistence\u0026rdquo; Implementation: DTO → Entity conversion Role assignment logic (every 5th user = MANAGER, rest = TECHNICIAN) Password hashing (BCrypt) Orchestration of seeding process Check for existing users before seeding DAO layer: Only CRUD operations, no business rules Rationale: Clear separation of concerns, testable business logic, DAO can be swapped without affecting logic 21. DTO Structure for Nested JSON # Decision: Use nested records that mirror API response structure Implementation: Inner record Name(String first, String last) and record Login(String password) Attempted approach that failed: @JsonProperty(\u0026quot;name.first\u0026quot;) — Jackson doesn\u0026rsquo;t support dot notation Correct approach: Nested structures with proper Jackson mapping Visibility: Records must be public (not private) for access from service layer That\u0026rsquo;s it for this week\u0026rsquo;s progress. Solid chunk of work, and I\u0026rsquo;m actually pretty happy with how clean the integration turned out. Next time: making this thing actually respond to HTTP requests.\n","date":"26 February 2026","externalUrl":null,"permalink":"/Portfolio/devlog/maintenancelog-thirdweek/","section":"Devlogs","summary":"Devlog Week 3: External API Integration \u0026 Concurrent Processing # Alright, welcome back! After a little bit of a break due to unforeseen circumstances, I’ve now continued my work on this little project of mine. The goal this week was to integrate and fetch some data from an external API and use it somehow — so, without further ado:\n","title":"Maintenance Log - Third Week: External API Integration \u0026 Concurrent Processing","type":"devlog"},{"content":"","date":"26 February 2026","externalUrl":null,"permalink":"/Portfolio/tags/seeding/","section":"Tags","summary":"","title":"Seeding","type":"tags"},{"content":"","date":"13 February 2026","externalUrl":null,"permalink":"/Portfolio/tags/dao/","section":"Tags","summary":"","title":"DAO","type":"tags"},{"content":"","date":"13 February 2026","externalUrl":null,"permalink":"/Portfolio/tags/hibernate/","section":"Tags","summary":"","title":"Hibernate","type":"tags"},{"content":"","date":"13 February 2026","externalUrl":null,"permalink":"/Portfolio/tags/jpa/","section":"Tags","summary":"","title":"JPA","type":"tags"},{"content":" Devlog Week 2: Relations, DAOs \u0026amp; Exception Handling # So, welcome to this second entry of my Devlog. Without further ado, let\u0026rsquo;s continue into this week\u0026rsquo;s additions.\nRelations, DAOs and exception handling # This week\u0026rsquo;s primary goal was to get the necessary relations between my entities up and running, begin to finalize the DAOs for each, and integrate interfaces both for simple CRUD and for specific queries across the board. After that, the project looked a bit like this:\nAfter that I began to add exception handling across my different CRUD operations. I made the design choice of adding my own custom DatabaseException class with some error types I can relate to common HTTP error codes, both to keep low coupling between layers and to have the correlation I need when I start to use APIException in the service layer to interpret DatabaseExceptions. For reference, I have thrown in my current custom exception, the error type enum and an example of how it\u0026rsquo;s used in the code:\npublic enum DatabaseErrorType { CONSTRAINT_VIOLATION , // 409 NOT_FOUND, // 404 CONNECTION_FAILURE, // 503 TRANSACTION_FAILURE, // 500 QUERY_FAILURE, // 500 UNKNOWN } public class DatabaseException extends RuntimeException { private final DatabaseErrorType errorType; public DatabaseException(String message, DatabaseErrorType errorType) { super(message); this.errorType = errorType; } public DatabaseException(String message, DatabaseErrorType errorType, Throwable cause) { super(message, cause); this.errorType = errorType; } public DatabaseErrorType getErrorType() { return errorType; } } @Override public User update(User u) { if (u == null || u.getUserId() == null) { throw new IllegalArgumentException(\u0026#34;User and user id are required\u0026#34;); } try (EntityManager em = emf.createEntityManager()) { em.getTransaction().begin(); try { User merged = em.merge(u); em.getTransaction().commit(); return merged; } catch (IllegalArgumentException e) { if (em.getTransaction().isActive()) { em.getTransaction().rollback(); } throw new DatabaseException(\u0026#34;User not found or invalid\u0026#34;, DatabaseErrorType.NOT_FOUND, e); } catch (PersistenceException e) { if (em.getTransaction().isActive()) { em.getTransaction().rollback(); } throw new DatabaseException(\u0026#34;Update User failed\u0026#34;, DatabaseErrorType.TRANSACTION_FAILURE, e); } catch (RuntimeException e) { if (em.getTransaction().isActive()) { em.getTransaction().rollback(); } throw new DatabaseException(\u0026#34;Update User failed\u0026#34;, DatabaseErrorType.UNKNOWN, e); } } } Who needs to know about whom? # One of the more interesting design decisions I had while working on the project this week was about relationships — specifically, which entities in my domain model need to know about each other, and which ones don\u0026rsquo;t.\nIt sounds obvious when you say it out loud, but it\u0026rsquo;s one of those things that\u0026rsquo;s easy to just\u0026hellip; not think about, and then end up with a bloated model where everything points to everything else for no good reason.\nAsset ↔ MaintenanceLog (Bidirectional)\nThis one was a clear yes for bidirectional. An asset needs to know about its logs because the whole point of the system is reviewing an asset\u0026rsquo;s maintenance history. You land on an asset, and you expect to see its logs right there.\n// Asset side @OneToMany(fetch = FetchType.LAZY, mappedBy = \u0026#34;asset\u0026#34;) @OrderBy(\u0026#34;performedDate DESC\u0026#34;) private List\u0026lt;MaintenanceLog\u0026gt; logs = new ArrayList\u0026lt;\u0026gt;(); // MaintenanceLog side (owning side) @ManyToOne(fetch = FetchType.LAZY, optional = false) @JoinColumn(name = \u0026#34;asset_id\u0026#34;, nullable = false) Asset asset; The MaintenanceLog owns the relationship (it holds the foreign key), but Asset can still navigate to its logs. The @OrderBy annotation means they always come back newest first, without me having to sort anything manually.\nMaintenanceLog → User (Unidirectional)\nThis one was a deliberate choice to not go bidirectional. A log needs to know who performed it — that\u0026rsquo;s just part of the audit trail. But does a User need a list of all the logs they\u0026rsquo;ve ever written?\nIn practice, no — at least not in the way this system works. If you want logs by a specific user, you go through the MaintenanceLogDAO and query directly. You don\u0026rsquo;t navigate there through the User entity. Keeping it unidirectional keeps User clean and focused.\n// MaintenanceLog side only — User has no collection of logs @ManyToOne(fetch = FetchType.LAZY, optional = false) @JoinColumn(name = \u0026#34;performed_by_user_id\u0026#34;, nullable = false) User performedBy; The question I found most useful to ask myself throughout all of this was: \u0026ldquo;Will I ever need to navigate this relationship from the other direction in a realistic use case?\u0026rdquo;\nNot \u0026ldquo;could I imagine a scenario where\u0026hellip;\u0026rdquo;, but actually, in the flow of this application, does it make sense? For assets and logs: yes, absolutely. For users and logs: no, you query for that directly.\nIt\u0026rsquo;s a small thing, but keeping relationships unidirectional where possible means less to maintain, less risk of accidentally triggering lazy loading where you don\u0026rsquo;t want it, and a domain model that actually reflects how the system gets used — rather than just how it could be used in some hypothetical future.\nAnyway, small win for thinking before just annotating everything with @OneToMany and calling it a day.\nAnd finally a quick rundown of my current design decisions:\nMaintenance Log - Design Decisions (Updated) # Updates Since Last Summary # Current state: DAO layer mostly complete will probably need to make more specific queries later when the need arises, integration tests planned next.\nMajor Design Changes # 1. Asset Fetch Strategy Changed Back to LAZY\nPrevious: FetchType.EAGER on Asset.logs Current: FetchType.LAZY on Asset.logs Why changed back: EAGER would load all logs for all assets in list queries (performance concern) Solution: Use explicit queries when logs are needed, rely on lazy loading otherwise 2. Added Helper Method to Asset\nImplementation: Asset.addLog(MaintenanceLog log) helper method added Previous decision: No helper methods needed for stateless REST API Why changed: Provides convenient way to maintain bidirectional relationship consistency Usage: Optional convenience method, not required for persistence 3. Asset.logs Collection Initialized\nImplementation: private List\u0026lt;MaintenanceLog\u0026gt; logs = new ArrayList\u0026lt;\u0026gt;() Rationale: Prevents NullPointerException when using addLog() helper method 4. Added @OrderBy to Asset.logs\nImplementation: @OrderBy(\u0026quot;performedDate DESC\u0026quot;) Rationale: Logs always returned in chronological order (newest first) without manual sorting 5. Asset Entity Now Has Selective Mutability\nImplementation: Only active field has @Setter, other fields immutable after creation\nRationale:\nname and description shouldn\u0026rsquo;t change (audit trail) active status needs to change (deactivation workflow) ID never changes (automatically generated) 6. Comprehensive Exception Handling Added to DAO Layer\nImplementation: All DAO methods now wrap JPA exceptions in custom DatabaseException\nPattern:\nInput validation throws IllegalArgumentException (e.g., null checks) Read failures throw DatabaseException with QUERY_FAILURE error type Write failures throw DatabaseException with TRANSACTION_FAILURE error type \u0026ldquo;Not found\u0026rdquo; scenarios throw DatabaseException with NOT_FOUND error type Transaction safety: All write operations include proper rollback on any exception\nRationale:\nDecouples persistence layer from JPA implementation details Provides consistent exception interface for service layer Maps cleanly to HTTP status codes without exposing persistence concerns Distinguishes between different failure types for better error handling DAO Method Additions: # MaintenanceLogDAO:\ngetByPerformedUser(Integer userId) — cross-user log queries getLogsOnActiveAssets(int limit) — filtered + paginated queries UserDAO:\ngetActiveUsers(int limit) — paginated active users query AssetDAO:\nsetActive(Integer id, boolean active) — only allowed mutation getInactiveAssets() — query deactivated assets Structural Changes: # Removed Task entity entirely Added TaskType enum Reorganized project structure into separate packages Added DAO interfaces layer Standardized exception handling across all DAOs Removed unnecessary new ArrayList\u0026lt;\u0026gt;() wrapping in DAO return statements 1. Immutability of Maintenance Logs # Decision: MaintenanceLog entries are never updated or deleted Implementation: MaintenanceLogDAO has NO update() method — throws UnsupportedOperationException Rationale: Data integrity and traceability (GDPR compliance, audit trail) Future consideration: Log corrections will reference previous entries (handled in service/GUI layer) 2. Soft Delete for Users # Decision: Users are deactivated, not deleted (active boolean field) Implementation: User.active field added No delete() method in UserDAO update() method used to set active = false Rationale: Preserve historical data — maintenance logs need to show who performed them, even after users leave 3. Immutability of Assets (with Exception) # Decision: Assets are immutable except for active status Implementation: Only active field has @Setter on entity name and description have no setters (immutable) AssetDAO.update() throws UnsupportedOperationException AssetDAO.setActive(Integer id, boolean active) allows only status changes Uses find() + setter pattern, not merge() Rationale: Asset details (name, description) should not change for audit trail Active status needs to change for operational/deactivation workflow Maintains data integrity while allowing necessary state management 4. Task System Redesigned as Enum # Decision: Removed Task entity, replaced with TaskType enum Implementation: public enum TaskType { PRODUCTION, MAINTENANCE, ERROR } Previous design: Task entity with title and description Why changed: Task descriptions are unique per log (technician writes what they did) Only the category (title) is predefined Enum is simpler, type-safe, and matches actual use case Specific work details go in MaintenanceLog.comment field (now nullable = false) Rationale: Eliminates unnecessary entity and relationship, clearer domain model 5. Entity Relationships # MaintenanceLog relationships: # @ManyToOne to Asset (owning side, LAZY loading) @ManyToOne to User (owning side, LAZY loading) All relationships: nullable = false Changed: Removed @ManyToOne to Task (now uses TaskType enum) Asset relationship: # @OneToMany to MaintenanceLog (non-owning side) mappedBy = \u0026quot;asset\u0026quot; LAZY loading @OrderBy(\u0026quot;performedDate DESC\u0026quot;) — automatic chronological sorting Initialized to empty ArrayList\u0026lt;\u0026gt;() to prevent NullPointerException Optional helper method: addLog(MaintenanceLog log) for bidirectional consistency No setter on collection (relationship managed via MaintenanceLog creation or helper method) User relationship: # Unidirectional from MaintenanceLog to User User entity has no collection of logs Rationale: Logs are accessed via Asset or direct queries, not via User 6. Helper Methods on Entities # Decision: Asset.addLog(MaintenanceLog log) helper method added Previous decision: No helper methods needed Why changed: Provides convenient way to maintain bidirectional consistency if needed Usage: Optional — not required for persistence, can still manage via MaintenanceLog side only Implementation: Sets both sides of relationship (logs.add(log) and log.setAsset(this)) 7. DAO Layer Responsibilities and Architecture # DAOs are \u0026ldquo;dumb persistence\u0026rdquo; — no business logic # Responsibilities:\nCRUD operations Database queries via JPQL Exception wrapping (convert JPA exceptions to DatabaseException) Input validation (IllegalArgumentException for null/invalid inputs) Not responsible for:\nValidation (service layer) Business rules (service layer) HTTP concerns (controller layer) DAO Interfaces # Implementation: Separate interfaces (IDAO\u0026lt;T\u0026gt;, IUserDAO, etc.) for each DAO Rationale: Contract separation for testing Interface Segregation Principle Allows mocking in future tests 8. Exception Handling Strategy # Custom Exception Hierarchy # public class DatabaseException extends RuntimeException { private final DatabaseErrorType errorType; } public enum DatabaseErrorType { CONSTRAINT_VIOLATION, // 409 NOT_FOUND, // 404 CONNECTION_FAILURE, // 503 TRANSACTION_FAILURE, // 500 QUERY_FAILURE, // 500 UNKNOWN } Exception Rules # \u0026ldquo;Not found\u0026rdquo; throws exceptions (not returns null) Rationale: In this system, looking up by ID is expected to succeed (IDs from database) Failed lookup indicates something went wrong (deleted, corrupted) Service layer catches and converts to appropriate HTTP responses Read operations (SELECT) → DatabaseErrorType.QUERY_FAILURE Write operations (INSERT, UPDATE) → DatabaseErrorType.TRANSACTION_FAILURE Rationale: Clear distinction helps service layer understand failure type Input validation → IllegalArgumentException (not DatabaseException) Transaction Management # Write operations: Explicit transaction with proper rollback on all exception types Read operations: No transaction needed Pattern: em.getTransaction().begin(); try { // operation em.getTransaction().commit(); } catch (PersistenceException e) { if (em.getTransaction().isActive()) { em.getTransaction().rollback(); } throw new DatabaseException(..., TRANSACTION_FAILURE, e); } 9. Query Strategy # Primary key lookups: Use em.find() (simpler, cached, returns null) Then check null and throw DatabaseException with NOT_FOUND Other queries: Use JPQL with TypedQuery and named parameters getSingleResult() handling: Catch NoResultException, rethrow as DatabaseException No unnecessary wrapping: Return query.getResultList() directly (removed new ArrayList\u0026lt;\u0026gt;() wrapper) 10. User Authentication Considerations # Decision: UserDAO.getByEmail() filters by active = true Rationale: Only active users can log in But: UserDAO.get(userId) returns all users (active or not) Rationale: Historical lookups need to show inactive users (who performed a log) 11. Field Protection # Decision: Manual @Setter annotations on individual fields Primary keys: NO setter (immutable after creation) Immutable entities (Asset): Only active field has setter Mutable entities (User): All fields except ID have setters Rationale: Prevents accidental ID modification and enforces immutability where needed 12. Email Uniqueness # Decision: Database-level constraint + service-layer validation Implementation: @Column(unique = true) on User.email Rationale: Defense in depth — database prevents duplicates, service layer provides better error messages 13. Merge vs. Find Pattern for Updates # Decision: Use find() + setter, not merge() with partial entities Implementation: Asset asset = em.find(Asset.class, id); asset.setActive(active); // commit auto-flushes Rationale: Prevents accidentally clearing other fields More explicit about what\u0026rsquo;s being changed No need for defensive find-before-merge Clearer intent 14. Fetch Strategy # Asset.logs: LAZY loading with @OrderBy(\u0026quot;performedDate DESC\u0026quot;) Rationale: Prevents loading all logs when listing assets Ordering: Always returns logs newest-first without manual sorting Access: Explicit queries or navigation through relationship when needed All MaintenanceLog relationships: LAZY loading 15. GetAll() Methods # Decision: Included in all DAOs despite potential performance issues Rationale: School project requirement to demonstrate functionality Implementation: AssetDAO.getAll() only returns active assets Some methods have limit parameter for pagination (e.g., getActiveUsers(int limit)) Note: Should be paginated for production use Architecture Layers (Planned) # ┌─────────────────────────────────┐ │ REST API (Javalin) │ ← HTTP status codes ├─────────────────────────────────┤ │ Service Layer │ ← Business rules, validation ├─────────────────────────────────┤ │ DAO Layer (Current) │ ← Database operations ├─────────────────────────────────┤ │ JPA/Hibernate │ ├─────────────────────────────────┤ │ PostgreSQL │ └─────────────────────────────────┘ ","date":"13 February 2026","externalUrl":null,"permalink":"/Portfolio/devlog/maintenancelog-secondweek/","section":"Devlogs","summary":"Devlog Week 2: Relations, DAOs \u0026 Exception Handling # So, welcome to this second entry of my Devlog. Without further ado, let’s continue into this week’s additions.\nRelations, DAOs and exception handling # This week’s primary goal was to get the necessary relations between my entities up and running, begin to finalize the DAOs for each, and integrate interfaces both for simple CRUD and for specific queries across the board. After that, the project looked a bit like this:\n","title":"Maintenance Log - Second Week: Relations, DAOs \u0026 Exception Handling","type":"devlog"},{"content":"","date":"13 February 2026","externalUrl":null,"permalink":"/Portfolio/tags/persistence/","section":"Tags","summary":"","title":"Persistence","type":"tags"},{"content":"","date":"6 February 2026","externalUrl":null,"permalink":"/Portfolio/tags/ai/","section":"Tags","summary":"","title":"AI","type":"tags"},{"content":"Alrighty, let’s try to actually get a real “first” blog post going for this site.\nWhat I use AI for # For today’s topic I wanted to talk a bit about my personal use of AI, briefly share my opinion on the doomsday talk about it taking our jobs, and explain how I use it day to day as a new programmer in a field where everything apparently is changing just as I get into it.\nWhen I’m perusing Reddit and various other online forums, or just hearing non‑developers talk, it sometimes seems like in a year none of us will be writing code. We’ll just prompt into Claude, Code, Codex, or whatever setup we’ve got going with multiple agents that check, triple‑check, and criticize each other’s output, and the human is just this “tech lead” with no real coding experience who simply trusts that whatever comes out will be flawless. The biggest question then becomes: how much can we charge for this “service” we just provided?\nWhy I’m not panicking about AI # I may be naive, but even with my limited time in software development I think that’s a bit of an overreaction — a feather that turned into ten chickens, so to speak (a very small thing blown way out of proportion). In reality, it’s probably more a question of how each individual uses the tools they’re given to increase their own productivity.\nWe may of course end up in a situation where we’re all the tech leads of our own little teams of AI agents, but I still believe that to use a tool well, you need to know both its use cases and its weaknesses, and use it as intended. It’s the good old analogy of using a hammer for every job when sometimes you need a screwdriver instead.\nUsing AI as a student # After that little rant, let’s get into how I’ve decided to use it for the moment. As a student I feel that it’s easy to just let Claude or ChatGPT take the wheel, and if you don’t question what it outputs or actually give it some constraints when you use it, it’s easy to end up in a situation where you just copy‑paste whatever it provides and don’t necessarily understand the solution it gives for assignments or what to question in its output.\nI’ve seen this just by watching now‑former fellow students who quickly gave into the temptation of quick solutions and ended up not getting the basic understanding needed to actually code something by themselves, instead of just relying blindly on an agent’s output. And I understand that it’s an easy temptation, especially if you don’t turn off things like:\nauto‑completion AI assistants integrated directly in your IDE of choice Why I mainly use Claude # I mainly use Claude for coding. Why? Because some people smarter than me said I should try it, and I actually like it for all my coding questions and mainly as a sparring partner for how I should structure a project.\nI started by setting up some personal references in my Claude settings on the web version (the one I use the most, just using it as a sparring partner), restricting it so it doesn’t give me direct answers right out of the gate, making me understand things using the Socratic learning method, and just treating me as a student at all times. This, however, turned tedious at points when you sometimes just need a question answered (like “what is the syntax for this thing specifically?” etc.).\nDaniel’s Claude setup # After actually discussing how we should use it with one of my good friends and study buddy, he sat down and made a whole set of instructions for Claude to use. I will paste it at the bottom of this blog post for you to peruse and try out yourself if you want to.\nI’ll just again, before I continue, give all credit to my friend Daniel who actually wrote this out. The only credit I will take for this is that I told him it existed and how I used it myself.\nSo as you can see, I have a lot of info on how we currently operate, and the plan is to update it with new technologies as we go and try to give ourselves the best opportunity to make progress, while still being the ones behind the wheel in this metaphorical car.\nExample: maintenance log project # I’ve actually used it for the little maintenance log project I’m working on, again as a way to have a sparring partner, using the associated tutor mode. Here are some quick examples of how it worked:\nMy first prompt was simply the README of my project and the entities I have defined (all can be found in my first devlog update), and the input I got out was then as follows:\nAs you can see, it then wants me to explain my choices and question those if needed. This is not for my basic coding understanding, but to train my brain to think about these things, and I need a sounding board for that — sometimes a rubber duck just isn’t good enough.\nAnyway, I think this is where I’ll cut it for now. I just had some quick thoughts I wanted to put down on the (preferably) digital paper.\nVERSION 2.0 DATAMATIKER STUDY ASSISTANT You are a programming assistant for practical software development tasks. Interaction is controlled via explicit behaviour modes. If no behaviour is specified, default behaviour applies. CONTEXT (INFORMATION ONLY) Education: - Datamatiker, 3rd semester Tech Stack: - Java (JDK 17) - JavaScript (ES6) - React - HTML - CSS - PostgreSQL - Maven - JUnit 5 - Java Persistence API - Javalin - IntelliJ IDEA - Git via Github - Hibernate Code Quality Focus: - Conventions, SOLID principles, loose coupling, high cohesion - Database normalization, security (GDPR, SQL injection) - Maintainability over cleverness - Appropriate error handling Preferences: - Clean, maintainable code - Convention over configuration - No overengineering - Danish / EU context when relevant This section provides background only. Do not infer behaviour from this section. BEHAVIOUR MODEL Conversation-level behaviours use the \u0026#39;@\u0026#39; prefix and remain active until replaced. Prompt-specific behaviours use the \u0026#39;#\u0026#39; prefix and supplement the active @ mode for one response only. Prompt-specific behaviours do not change the active conversation behaviour. @vanilla overrides all behaviours and constraints. If no @ mode has been explicitly set, @quick is active by default PRECEDENCE RULES 0. If no @ mode has been set, @quick is active by default 1. @vanilla overrides everything 2. #help and #listcurrent are meta-commands that display system info 3. # (prompt-specific) supplements @ (conversation-level) for one response 4. If # and @ behaviors conflict, # takes precedence for that response only 5. Only one @ behaviour can be active at a time (if multiple specified, the LAST one takes effect) 6. Multiple # behaviours can be combined (e.g., #solution #tdd) CONVERSATION-LEVEL BEHAVIOURS @quick (DEFAULT) - Default conversation behaviour - Provide a direct answer or conclusion - Include a short, concrete justification (1–2 sentences) - No concept explanations - No teaching - No step-by-step reasoning @tutor - Act as mentor, not solution provider - Give feedback, hints, and guidance instead of complete answers - Ask leading questions to help discover solutions - Encourage step-by-step problem-solving - Point out mistakes and explain WHY they\u0026#39;re problematic - Include TDD by default (use #notdd to skip) @teach - Teaching mode focused on preparing students for assessments - Explain concepts thoroughly with clear reasoning - Use precise terminology and proper explanations - Break down complex topics into digestible parts - Emphasize WHY things work, not just HOW - Provide context and real-world connections - Build understanding progressively - Avoid unnecessary repetition @vanilla Ignore all custom behaviours, preferences, and constraints defined in this settings prompt. Respond using default Claude behaviour as if no custom instructions were provided. Remains active until another @ mode is set. PROMPT-SPECIFIC BEHAVIOURS #solution Provide a complete solution. Include full code if relevant. Minimal justification only. #explain Explain concepts or decisions in depth. Brevity is not required. #review Code review only. Evaluate: conventions, SOLID principles, coupling/cohesion, security, maintainability. Be direct about problems. No teaching. No guidance. #debug Identify likely causes. Suggest concrete fixes. Avoid theory unless necessary. #refactor Improve existing code only. No new features. Focus on structure and clarity. #pseudocode Describe logic and algorithms only. Avoid language-specific syntax. #tdd Test-driven development approach. Write tests first. Show red-green-refactor cycle. Explain what each test validates. #listcurrent Display the currently active behaviour settings at the start of the response. Output format must be exactly: Conversation mode: @[mode] Prompt modifiers: #[modifier1], #[modifier2], ... If no conversation mode has been explicitly set, show: Conversation mode: @quick (DEFAULT) If no prompt modifiers except #listcurrent, show: Prompt modifiers: none Behaviour rules: - If #listcurrent is used alone, output ONLY the status and end the response. - If #listcurrent is combined with other @ or # commands, display the status first, then continue with the response using the active behaviours. - Do NOT explain or describe the behaviours themselves. #help Output the HELP TEXT section verbatim inside a plain text code block. Do not modify, summarize, or reformat the content. Then end the response with no additional text. HELP TEXT ================================================================================ DATAMATIKER STUDY ASSISTANT — v2.0 Purpose: Structured AI assistance for software development education Target group: Datamatiker students Institution: Erhvervsakademi København Author: Daniel Hangaard Last updated: January 2025 ================================================================================ HOW THIS WORKS This prompt customizes Claude\u0026#39;s behavior using commands: - @ commands set conversation mode (stays active until changed) - # commands modify single responses (one-time use) - You can combine them: \u0026#34;@tutor #pseudocode\u0026#34; ================================================================================ CONVERSATION MODES (@) ================================================================================ @quick (DEFAULT) - Direct answers, brief justification @tutor - Mentoring mode, guiding questions @teach - Exam-style, precise terminology @vanilla - Default Claude, ignores all settings (until new @ mode set) ================================================================================ PROMPT MODIFIERS (#) ================================================================================ #solution - Complete code, minimal explanation #explain - Detailed explanations #review - Code review (quality, conventions, security) #debug - Identify bugs, suggest fixes #refactor - Improve structure only #pseudocode - Algorithm/logic, no syntax #listcurrent - Show active modes #help - Show this help ================================================================================ USAGE ================================================================================ Set mode: \u0026#34;@tutor\u0026#34; (Only one active at a time) Add modifier: \u0026#34;#solution\u0026#34; Combine: \u0026#34;@quick #explain #tdd\u0026#34; (quick mode BUT in-depth explanation with TDD) Use: @quick for most questions Use: @tutor when learning new topics Use: @teach when preparing for assessments Use: #listcurrent if behavior seems wrong Use: @vanilla if custom settings cause issues ================================================================================ EXAMPLES ================================================================================ \u0026#34;@quick How do I connect to PostgreSQL?\u0026#34; → Direct answer with brief justification, no teaching \u0026#34;@tutor How do I connect to PostgreSQL?\u0026#34; → Guided response using leading questions and hints, no direct solution \u0026#34;@teach What is dependency injection?\u0026#34; → Structured, thorough explanation using correct terminology and examples \u0026#34;#solution #tdd Create a login validator\u0026#34; → Complete solution presented using a test-driven development approach \u0026#34;@quick #review\u0026#34; + [code] → Concise code review highlighting key issues without teaching \u0026#34;@teach #review\u0026#34; + [code] → Structured explanation of issues using correct terminology \u0026#34;#listcurrent\u0026#34; → Displays the currently active conversation mode and prompt modifiers ================================================================================ ","date":"6 February 2026","externalUrl":null,"permalink":"/Portfolio/posts/06-07-2026-ai-my-use/","section":"Posts","summary":"Alrighty, let’s try to actually get a real “first” blog post going for this site.\nWhat I use AI for # For today’s topic I wanted to talk a bit about my personal use of AI, briefly share my opinion on the doomsday talk about it taking our jobs, and explain how I use it day to day as a new programmer in a field where everything apparently is changing just as I get into it.\n","title":"AI and how I use it","type":"posts"},{"content":"","date":"6 February 2026","externalUrl":null,"permalink":"/Portfolio/tags/blog/","section":"Tags","summary":"","title":"Blog","type":"tags"},{"content":" Devlog Week 1: Project Kickoff \u0026amp; Scope # So, first of all, welcome to this first week post of the development of my little maintenance log. Just for posterity, let\u0026rsquo;s start with what is in my README for the project, since that actually breaks down what it is:\nJesperTAndersen/MaintenanceLog Java 0 0 Maintenance Log Backend # This project is a backend API for managing maintenance history of assets such as machines, vehicles, or equipment.\nThe system focuses on traceability and data integrity by storing maintenance activities as immutable logs. Each log represents a concrete maintenance action performed on an asset at a specific point in time.\nThe application is developed incrementally as a school project on the third semester at EK. Lyngby, where new backend technologies and architectural concepts are gradually introduced and integrated into the same system.\nCore concepts # Assets that require maintenance Maintenance logs representing performed work Users with different roles (e.g. technician, manager, admin) Historical data that should not be modified or deleted Initial scope # The initial version of the system focuses on a simple domain model with assets and maintenance logs, exposing basic CRUD functionality through a REST API.\nAuthentication, authorization, validation, testing, and deployment concerns will be added progressively as the project evolves.\nGoal # The goal of this project is to build a production-ready backend system that demonstrates clean structure, realistic business rules, and continuous technical progression.\nSo after that piece of business pitch, lets go into what actually happened this week:\nThe Project \u0026amp; Hibernate # This week\u0026rsquo;s focus was to get Hibernate integrated into the project and understand how the different annotations work and what they do. I started by defining which entities my project should have and how my structure should look going forward. I started by sketching a quick class diagram in PlantUML.\nAnd finally a quick rundown of my current design decisions:\nand finally a quick rundown of my current design decisions:\nMaintenance Log - Design Decisions # 1. Immutability of Maintenance Logs # Decision: MaintenanceLog entries are never updated or deleted Implementation: MaintenanceLogDAO has NO update() or delete() methods Rationale: Data integrity and traceability (GDPR compliance, audit trail) Future consideration: Log corrections will reference previous entries (handled in service/GUI layer) 2. Soft Delete for Users # Decision: Users are deactivated, not deleted (active boolean field) Implementation: User.active field added No delete() method in UserDAO update() method used to set active = false Rationale: Preserve historical data — maintenance logs need to show who performed them, even after users leave 3. Entity Relationships # MaintenanceLog relationships: # @ManyToOne to Asset (owning side) @ManyToOne to Task (owning side) @ManyToOne to User (owning side) All relationships: LAZY loading, nullable = false Asset relationship: # @OneToMany to MaintenanceLog (non-owning side) mappedBy = \u0026quot;asset\u0026quot; LAZY loading No setter (only managed via MaintenanceLog creation) 4. No Helper Methods on Entities # Decision: No addLog() helper method in Asset Rationale: Stateless REST API — entities reloaded fresh each request, so in-memory bidirectional sync not needed Implementation: All relationships managed via builder pattern on owning side (MaintenanceLog) 5. DAO Layer Responsibilities # DAOs are \u0026ldquo;dumb persistence\u0026rdquo; — no business logic # Responsibilities:\nCRUD operations Database queries Return null for not-found (consistent with em.find()) Not responsible for:\nValidation (service layer) Business rules (service layer) HTTP concerns (controller layer) 6. Transaction Management # Decision: Transactions only on write operations Read operations (SELECT queries): NO transaction Write operations (INSERT, UPDATE): Transaction required 7. Query Strategy # Primary key lookups: Use em.find() (simpler, cached) Other queries: Use JPQL with named parameters Consistency: Methods return null when not found (not exceptions) Exception: getByEmail() catches NoResultException and returns null 8. User Authentication Considerations # Decision: UserDAO.getByEmail() filters by active = true Rationale: Only active users can log in But: UserDAO.get(userId) returns all users (active or not) Rationale: Historical lookups need to show inactive users (who performed a log) 9. Field Protection # Decision: Manual @Setter annotations on individual fields Primary keys: NO setter (immutable after creation) Other fields: Setters allowed for updates Rationale: Prevents accidental ID modification 10. Email Uniqueness # Decision: Database-level constraint + service-layer validation Implementation: @Column(unique = true) on User.email Rationale: Defense in depth — database prevents duplicates, service layer provides better error messages Portfolio Site # Finally, to end this \u0026ldquo;little\u0026rdquo; entry for the week, I think actually talking a little about this Hugo/Blowfish template thing is in its place. It has been a fun little side thing to get up and running, and I look forward to tinkering with the different settings and finding out how everything actually operates as I go further along. I\u0026rsquo;m doubtful if I will use this for other things than a dev log at the moment, but maybe I should start my new life as a bread blogger as well. I just think there is too much sourdough in the world already, and I think I only get a short runway on throwing up pictures of my normal yeast bread.\n","date":"6 February 2026","externalUrl":null,"permalink":"/Portfolio/devlog/maintenancelog-firstweek/","section":"Devlogs","summary":"Devlog Week 1: Project Kickoff \u0026 Scope # So, first of all, welcome to this first week post of the development of my little maintenance log. Just for posterity, let’s start with what is in my README for the project, since that actually breaks down what it is:\n","title":"Maintenance Log - First Week: Project Kickoff \u0026 Scope","type":"devlog"},{"content":"","date":"6 February 2026","externalUrl":null,"permalink":"/Portfolio/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":"First Blog, alot of testing going on with this template\u0026hellip; nothing is set in stone for how it looks, but lets see where we end up when were done.\n","date":"5 February 2026","externalUrl":null,"permalink":"/Portfolio/posts/0502-2026/","section":"Posts","summary":"First Blog, alot of testing going on with this template… nothing is set in stone for how it looks, but lets see where we end up when were done.\n","title":"First Blog","type":"posts"},{"content":"","date":"5 February 2026","externalUrl":null,"permalink":"/Portfolio/tags/firstpost/","section":"Tags","summary":"","title":"FirstPost","type":"tags"},{"content":"If you’ve somehow found your way here: welcome!\nMy name is Jesper Andersen. At the time of writing this (2026), I’m in my early thirties and currently working on my AP degree in Computer Science at EK – Lyngby in Denmark.\nBefore this, I spent 14 years in construction working as a journeyman painter.\nWhen I’m not busy being a student, I spend my time being a father (to two lovely daughters), a husband, a recreational bodybuilder, an enjoyer of video games, and an amateur Warhammer painter. I also own two cats: one loves me, and the other merely accepts my presence in the household.\n","externalUrl":null,"permalink":"/Portfolio/aboutme/","section":"Jesper Andersen - Blog \u0026 DevLog","summary":"If you’ve somehow found your way here: welcome!\nMy name is Jesper Andersen. At the time of writing this (2026), I’m in my early thirties and currently working on my AP degree in Computer Science at EK – Lyngby in Denmark.\nBefore this, I spent 14 years in construction working as a journeyman painter.\n","title":"About me","type":"page"},{"content":"","externalUrl":null,"permalink":"/Portfolio/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/Portfolio/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"}]