Interviewing for a successor

Posted on Fri 11 May 2018 • Tagged with Institute for Computer Vision and Computer Graphics, Work

I left my job at the ICG in March 2018. One of my last tasks there was helping to search for a successor to whom I could hand over my responsibilities with as few worries as possible. I took the same job posting that had announced the opening when I applied and updated it with new phrasing. I wanted to emphasize that a lot of learning can be done on the job. Experience with the comprehensive list of open source technologies the institute uses was a definite plus, but I was certain that a minimum of Linux understanding, good written and spoken English, and the willingness to learn were enough to grow into the position. After all, people usually apply without matching every qualification on your list, but with other skills that aren’t on it, and those help them anyway.

I wanted our method of judging the applicants to be as objective as possible — therefore I put together a questionnaire containing two real-life scenarios as well as a short list of extra questions. These questions were discussed with the applicants and I decided which topics were answered sufficiently. I held the entire technical part of each interview.

I want to point out that my goal was not — as some of my colleagues joked — to create a test one could “pass” or “fail”. I simply wanted to judge applicants by a more meaningful measure than “they were good” or “they were ok”. I hoped my scenarios would tell us whose technical knowledge was stronger whenever applicants were subjectively close to each other.

Section 1 - VM diagnosis & rescue

You have a physical machine running a Hypervisor (e.g. Xen) and a virtual machine running a Debian based Linux distribution (e.g. Ubuntu). You notice that the VM has stopped checking in with your monitoring solution. What do you do?

- contact via SSH
- check if the machine is listening (e.g. `ping`, `nmap`)
- check if the machine is running (e.g. `xl list`, `xl top`)
- send out notice that you're working on said machine (*bonus*)

The initial step of the diagnosis covers checks one can perform really quickly. I accepted solutions that did not name the suggested command line utilities as long as they served a similar purpose (e.g. `VBoxManage` would be fine). Bonus answers give additional points that can raise the score above the maximum of a given question.
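For illustration, here is roughly what those first checks look like in shell. The domain name is made up, and the `is_running` helper only parses `xl list`-style output, so everything that needs a real hypervisor is left as a comment:

```shell
# Hypothetical domain name; adjust to your environment.
VM=vmhost-guest01

# Does `xl list` output (passed on stdin) contain the domain?
# Column 1 is the domain name; the first line is a header.
is_running() {
  awk -v vm="$1" 'NR > 1 && $1 == vm { found = 1 } END { exit !found }'
}

# On the hypervisor host (commented out, since these need the real machine):
#   ping -c 3 "$VM"                                          # reachable at all?
#   nmap -p 22 "$VM"                                         # is sshd listening?
#   xl list | is_running "$VM" && echo running || echo "not running"
```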

You have established that the machine is indeed not running. When you try to restart it via the hypervisor, it shows activity in the hypervisor output, but it is neither accessible remotely (via SSH) nor does it show up in the monitoring solution. What are your next steps?

- check log files
  - host logs => there is nothing relevant in them
  - guest logs
  - centralized logging solution (*bonus*)
- try starting the machine with more verbose output from the hypervisor (*bonus*)
- check with some tool that displays screen of VM (e.g. VNC with SSH forwarding, `virt-manager`)

The second step is trying to figure out the cause of the issue after having verified the issue in step one.
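A sketch of that digging in shell. The log paths are typical Xen/Debian defaults and purely illustrative; the `log_errors` helper just filters a noisy log down to the interesting lines:

```shell
# Surface only lines that look like trouble in a (possibly long) log.
log_errors() {
  grep -iE 'error|fail|warn|panic' "$@"
}

# On the hypervisor (paths are typical defaults; commented out here):
#   log_errors /var/log/xen/xl-guest01.log    # host-side log for the domain
#   xl -v create /etc/xen/guest01.cfg         # retry with verbose output
#
# To see the guest's screen, tunnel VNC over SSH and point a viewer at it:
#   ssh -L 5900:localhost:5900 hypervisor-host
#   vncviewer localhost:5900
```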

You realize that the machine is not booting. It looks like a problem with GRUB but you are not entirely sure. You’d like to access the guest logs, just to be sure. The guest’s entire disk is an LVM logical volume handed directly to the VM by the hypervisor. How do you proceed?

- find a tool to mount the logical volume on the host
  - read-only (*bonus*)
  - `kpartx` (*bonus*)
- check the logs in `/var/log/syslog` and similar files in `/var/log`; also check `/var/log/dpkg.log`

Step number 3 is making reasonably sure the problem actually surfaced due to GRUB and was not triggered by something else entirely.
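Roughly what that looks like on the host. The volume group and LV names are made up, and mounting read-only means any later repair attempt starts from an untouched disk. The `mapper_name` helper only predicts the device-mapper name `kpartx` will create (hyphens in the VG/LV names are doubled, and the partition number is appended):

```shell
# Predict the /dev/mapper name kpartx creates for a partition inside an LV.
# Usage: mapper_name /dev/<vg>/<lv> <partition-number>
mapper_name() {
  lv=${1#/dev/}
  vg=${lv%%/*}
  name=${lv#*/}
  printf '/dev/mapper/%s-%s%s\n' \
    "$(printf '%s' "$vg" | sed 's/-/--/g')" \
    "$(printf '%s' "$name" | sed 's/-/--/g')" "$2"
}

# On the host, as root (names are hypothetical; commented out because
# they need the real environment):
#   kpartx -av /dev/vg0/guest01-disk                        # map the partitions
#   mkdir -p /mnt/guest
#   mount -o ro "$(mapper_name /dev/vg0/guest01-disk 1)" /mnt/guest  # read-only!
#   less /mnt/guest/var/log/syslog
#   less /mnt/guest/var/log/dpkg.log                        # recent upgrades?
#   umount /mnt/guest
#   kpartx -dv /dev/vg0/guest01-disk                        # remove mappings
```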

A GRUB problem now seems more likely than ever. How do you proceed to try and fix the VM?

- boot from ISO (or remount read-write on host)
- `boot-repair` (*bonus*)
- reinstall GRUB

The last step of the first scenario deals with an actual attempt at fixing the VM. The infrastructure at ICG is built in a way that makes repairing a machine without data loss more feasible than spinning up and configuring a new one.
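A rough sketch of the repair from the host side. Every name here is an assumption (the mapper device, the mount point, `/dev/xvda` as the guest’s boot disk), and `DRY_RUN` defaults to only printing the steps instead of running them:

```shell
# Reinstall GRUB into the guest's disk via chroot -- sketch only.
DRY_RUN=${DRY_RUN:-1}                       # set DRY_RUN=0 to really execute
DISK=${DISK:-/dev/mapper/vg0-guest01--disk1}
MNT=${MNT:-/mnt/guest}

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

repair_grub() {
  run mount "$DISK" "$MNT"                  # this time read-write
  # GRUB needs /dev, /proc and /sys visible inside the chroot:
  for d in dev proc sys; do run mount --bind "/$d" "$MNT/$d"; done
  run chroot "$MNT" grub-install /dev/xvda  # the guest's boot disk
  run chroot "$MNT" update-grub
  for d in sys proc dev; do run umount "$MNT/$d"; done
  run umount "$MNT"
}

repair_grub
```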

Open question: What do you think could be the cause of such an issue?

No points were given for this question, but I noted down what the applicants came up with and commented on the likeliness of their thoughts, so they had some immediate feedback.

Section 2 - Server best practices

You have a service that you need to provide to the whole internet (or rather, your colleagues who are currently abroad). It has at least one component accessible by a web browser and one more component (e.g. SSH, IMAP, POP) that needs to be protected. How would you make reasonably sure that things are protected?

- protect the web service with a TLS certificate [and encryption]
- redirect port 80 to 443 to always enforce encryption
- implement a rate limit against brute force attacks (e.g. `fail2ban`, builtin software)
- have the server update its software on its own (or have a way to be notified of updates, e.g. mail, RSS)
- implement a backup strategy [and test it]
- provide VPN access or suggest using TU VPN and restrict firewall settings (*bonus*)
- **set up monitoring for aforementioned things**

The server best practices section was my attempt to get a feel for what the applicant knows about operations. While the previous scenario revolved around troubleshooting, this one focused on knowledge and understanding of running servers in production. This was a question where I almost always received answers in addition to the ones I had hoped for.
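For the redirect bullet, a minimal sketch of what that looks like in nginx (the hostname is hypothetical); the actual service then only listens on 443 with its TLS certificate:

```nginx
# Force all plain-HTTP traffic onto HTTPS.
server {
    listen 80;
    server_name service.example.org;       # hypothetical hostname
    return 301 https://$host$request_uri;  # permanent redirect to port 443
}
```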

Section 3 - Short questions

Do you have any experience with:

- Git
- Continuous integration (e.g. GitLab CI, Jenkins)
- Configuration management (e.g. Puppet, Chef, Salt)
- standard monitoring tools (e.g. Nagios, Sensu, Elastic products)
- NFS and auto-mounting
- web servers (e.g. Apache, Nginx)
- debugging software not written by you (e.g. Python code that shipped with your distribution)

This last section of questions aims to establish which topics the applicant might need training in, in order to fully understand and utilize the existing ICG infrastructure.


After carefully reviewing all applicants, their technical skills, and their demonstrated understanding of the systems in use, I gave an informed recommendation on whom to hire. I had the — very short — opportunity to introduce my successor to the most critical systems. For everything else they will have to rely on the documentation I wrote, their team members and their own skillset.

I certainly wish them all the best.

Ljubljana 2018 (BSides Ljubljana)

Posted on Sat 24 March 2018 • Tagged with Journeys

Going to Ljubljana for BSides Ljubljana 2018 was comparatively trouble-free, not counting my scheduling difficulties, which resulted in several annoying waiting times.

Day 1

I took the train from Graz and read a book I had previously purchased but never read, as I usually do when trying to pass time. When the scheduled arrival time came I was a bit nervous as I feared I might have missed my stop. Several minutes after the scheduled time I still didn’t hear an announcement saying the next stop would be Ljubljana. Nervousness turned into slight annoyance. By now I just assumed I had either missed the stop - which was bad - or the train was delayed - which was slightly less bad. I didn’t need to catch a connection. It just meant the people responsible for the apartment would have to wait even longer for me though.

I left the train and wondered why the train station would be so small. In retrospect I only saw the underground and the back exit on that day. The actual train station is larger, though not especially large. It’s also a short walk away from most of the tracks, which was part of the reason for my confusion.

Since - obviously - leaving the train station through the wrong exit does not lead one directly to waiting taxis, I had no luck there. So I asked Google Maps to find me an ATM and plot a route to my accommodation. The suggestion included a bus ride with pricing information for the bus. Upon entering I saw people paying with a contactless card. No one paid in cash, as is common in Graz. I asked the bus driver but he waved me away, so I took a seat, next to a helpful information panel. Said panel happened to spell out the usage instructions for the contactless payment but not where to get such a card.

Once I arrived at the loft - Tobacna Red - it really was as nice as the reviews and images had suggested. I don’t remember asking my contact for information about the Urbana bus cards, which was an oversight. I ventured out to find something to eat and asked in a kiosk for the tickets because I had read earlier on the Internet that you could “get them basically everywhere”. So, yeah, newsflash: don’t believe everything on the Internet, regardless of how nicely made the site is. Anyway, the friendly lady at the kiosk couldn’t help me, and the one resident I found clearly understood what I meant, but her English wasn’t up to explaining to me where to go.

I had lunch at Meta in Bazilika where the hint from the waiter not to take the “wok risotto” should’ve been a clue not to eat there. Or the fact that despite the nice weather both the garden and the interior were completely empty.

I’m honest, the wok risotto… don’t take that. It’s just not good.

Well, thanks for that, but neither was the risotto with turkey and tomatoes. Now, I’m not a cook, so what do I know… but you might want to try seasoning the turkey next time. Or making the risotto actually creamy. However, the waiter also gave me the hint that the Urbana card was available “in the center”.

After venturing there, I found a tourist information spot which sold the cards. Paid €2 for the card and €5 for an initial charge. Then I walked up the castle hill but by the time I was done taking some pictures and having an initial walk around the castle it was too late to go inside with only 20 minutes remaining.

View of some part of Ljubljana, taken from the castle hill in the afternoon sun

A tower as part of the castle in Ljubljana, Slovenia

So, down into the old parts of the town it was. I fancied a cake and looked around until I found a restaurant with great looking Tiramisu visible from outside. Sadly, the waiter told me he could not sell me the Tiramisu. It was reserved for dinner guests and official dinner hours wouldn’t start until later. However, he pointed me to a café which serves great cakes. I checked out Slaščičarna pri Vodnjaku. The cake made with Nutella and bananas was delicious. The tea was… okay I guess. I have rarely had fruit tea that was that sour though - not sure what was in there. I even ventured back to the restaurant to thank the waiter for his suggestion after checking whether he was currently busy. That earned me another recommendation - Le Petit Cafe which served excellent breakfast, according to him. Now, breakfast isn’t really my time of day, but this opinion slightly changes when it’s served until 1 PM.

After that I was getting tired, so I grabbed a Sub for later and headed home. By foot, since the route planner didn’t suggest any buses. After checking I realized why. Going by foot took 12 minutes. Waiting for the next bus would’ve taken 24. Getting over that annoyance took several days.

Day 2


Due to sleepiness I only caught the last few seconds of the BSides keynote even though the event was literally in the next building. Also, before I forget: the videos of the talks have been archived. If you want to watch just one talk, make it “The consequences of bad security and privacy in healthcare” by Jelena Milosevic.


The first talk I attended was Security Automation in CI pipeline. I considered most of its lessons obvious, but that is after working as a developer and as an admin with a CI pipeline I built out of personal interest. Basically: if things can be automated to avoid problems, let’s try to automate them. I don’t think many companies have pipelines in place that allow testing security in a reproducible and automated way. Of particular interest to me was the way the talk suggested doing this.

The (GitLab) pipeline had a test stage, a deploy-to-staging stage, ran the security tests against staging and only afterwards deployed to production. I like this idea but am somewhat curious how much delay this separation adds. I usually try to increase parallelism and would’ve preferred an approach in which the security testing doesn’t add two more stages. My preference stems from the fact that stages run sequentially while jobs in the same stage can run in parallel. (GitLab terminology and CI doc)
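A hypothetical `.gitlab-ci.yml` sketch of that preference: instead of giving the security tests their own stage, they run as a job inside an existing verification stage, in parallel with other checks against staging (all script names are made up):

```yaml
stages:
  - test
  - deploy-staging
  - verify            # smoke tests and security tests share this stage
  - deploy-production

unit-tests:
  stage: test
  script: ./run-tests.sh          # hypothetical project script

deploy-staging:
  stage: deploy-staging
  script: ./deploy.sh staging

smoke-test:
  stage: verify
  script: ./smoke-test.sh staging

security-scan:
  stage: verify                   # runs in parallel with smoke-test
  script: ./security-scan.sh staging

deploy-production:
  stage: deploy-production
  script: ./deploy.sh production
```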


I caught the last words of the first talk in track 1, since the talk I had been attending ended early. The presenter had to defend his work and lecture, since no one outside the corporate/government environment actually felt the need to decrypt QUIC and TLS 1.3 traffic. I then sat down for Trape – the evolution of phishing attacks.

I don’t think I know quite enough about how phishing attacks and persistence on machines are typically done to properly evaluate the use of Trape. Quite frankly, while the automated profiling of social media and general website accounts seemed handy, it didn’t impress me. Yes, it was certainly convenient, but I hardly found exploitation of browser implementation details from a local server all that exciting.


The consequences of bad security and privacy in healthcare was my favorite talk this BSides. It wasn’t purely technical nor was it theoretical. Instead, it was a window into how hospital IT security is often run. Opsec as seen in reality. Some of the results were really bleak and, quite frankly, horrifying in terms of possible implications for abuse of power, abuse of data or loss of data.

Here’s a quote - which I note from memory instead of the stream, so it might not be entirely accurate:

So, I asked them: have you upgraded all systems and secured everything properly? And they answered: yes, of course, everything is fine. But then you find a blood bank running on Windows XP.

These are the scenarios that make you shiver as someone with even a faint interest in information security. Mission critical infrastructure running on an OS of which even the successor has already been retired.


There was pizza. Pizza is the default for BSides events from what I’ve seen so far, except when you’re in the land of pizza in which case there’s a mixed buffet arranged by a catering firm.


Someone joked up front that the Docker security talk would probably be short. It was. It was extremely short and disappointing. I had joined the talk hoping to learn something valuable about the security aspects of a technology I had almost no experience with yet.

There were two sides to this talk: one great and one depressing. The depressing part was that the advice for Docker security came down to three bullet points:

  • don’t use --privileged
  • don’t mount the Docker socket inside the container
  • don’t use the docker group and prefer usage of sudo instead

I have furthermore been told that this should’ve been extended by at least:

  • drop root privileges in the container, if possible as soon as possible

Now, the cool part was that the speaker demonstrated how each of these flaws could be used to gain root on the host. Frankly speaking, that these kinds of configurations might be deployed to production is a bit terrifying.
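The point about dropping root can be as small as a `USER` line baked into the image. A minimal sketch, assuming a Debian-based image and a hypothetical entry point:

```dockerfile
FROM debian:stable-slim
RUN useradd --create-home app       # unprivileged user, created at build time
COPY --chown=app:app . /home/app
USER app                            # everything from here on runs without root
CMD ["/home/app/run.sh"]            # hypothetical entry point
```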


The speaker of How (not) to fail as a security professional [Lessons learned] has been working in InfoSec, development and administration for years and shared some advice on how to fail. While the talk was indeed very entertaining and certainly helpful, I don’t remember a whole lot of it. One would think that not being an asshole and never stopping learning would be a good starting point for people in any career. Also, writing articles about individual talks several weeks later without any notes isn’t particularly easy…


The keynote speaker, Finux, put on an impromptu version of the third part of his privacy-focused lecture. I’ll be frank, I didn’t like part 1 a lot in 2016. However, I was positively surprised by the content and the blend of disciplines in this one. The impact of architecture on the concept of privacy was a fascinating topic I’d probably never have considered reading up on.

CSides, so to say

After listening with a sharp mind for the whole day I wanted some relaxation and went to one of the fancier restaurants. I wasn’t exactly sure what to go for, but ended up in Vander restaurant, eating boar and fancy dessert. The city is lovely in the evening - even when it was pitch dark, people were still out and about, huddling around heating lamps and enjoying their drinks near the river. The atmosphere was amazing and I struggle to imagine how nice it has to be when it’s not too cold for my taste. Ljubljana’s cafés also happened to have fruit tea in stock which was a huge step up from my Rome visit. ;)

Shot across the river, people sitting around heating lamps in front of a brightly lit bar, shot taken during the dark of the night

Day 3

I checked out at 11:00 and sat around until 16:00 when my train left back for Graz. The weather wasn’t suited for grand adventures given that there was constant slight rain that made the perceived temperature drop. I’m already constantly cold, so no need to stay outside longer than necessary in suboptimal conditions. Still, I was inclined to check out the café and headed there. I arrived and it was packed. Even the tables on the outside below big umbrellas with heating lamps were full.

Resorting to the Café Lolita where I had seen the waiter juggling the evening before, I had the most delicious Black Forest cake I’ve ever tasted. I ordered that with “hot chocolate” and was pleasantly surprised when I actually got hot chocolate instead of the regular cocoa. As an aside, I order hot chocolate because I’m used to getting cocoa, and the term seems to be more common than just “cocoa” in the foreign countries I’ve visited so far.

A rectangular dish with a small piece of Black Forest cake. Behind the dish a cup with liquid brown chocolate

Since I sat there for several hours, I also had non-alcoholic punch which was very tasty. I liked the berries and mandarin oranges a lot. I wholeheartedly recommend this place.

Of course, no place is perfect.

I realize I’m the stupid tourist her[e] but wouldn’t you want to label your restrooms in your prime location cafe in a way that is somewhat clear to foreigners? ~Alexander Skiba (@ghostlyrics), March 11, 2018

A tall glass filled with red punch. It has fruits swimming in the punch

After some more sitting around and waiting I finally walked to the train station, all the while looking for some kind of food place along the way. None of them tickled my fancy, so I boarded hungrily and made for the dining car after a while. Food there was rather plain, but I liked the open car. The low chair backs and plush seats combined with large panorama windows reminded me of the Murder on the Orient Express movie that had impressed me last year.

A wide open dining car with cozy benches the low backs of which offer a great view of the scenery through panorama windows

As an aside, I did check out the train station hall and noticed something that would’ve helped me a lot on my first day: Of course, the tourist information point inside the train station would have been the other viable option for purchasing an Urbana card. Had one realized that there was a main building. Had one bothered to check inside.

Reading recommendations (2018-02-19)

Posted on Mon 19 February 2018 • Tagged with Reading recommendations

I’ve spent most of my time with Final Fantasy XII recently, which has been remastered for PC and is as great as I remember. The rest of my free time went into some light reading, novels and writing job applications. Apart from that I continue to play Final Fantasy XIV, but I write about that from time to time anyway.

Final Fantasy XIV: Stories of Departures

Posted on Mon 29 January 2018 • Tagged with Stories, Video Games

This wasn’t an easy post to write but I still needed to get it out. You can ignore the following while muttering #mmoproblems to yourself. I won’t fault you. I’d still appreciate if you kept on reading though.

Kakysha lying in bed and contemplating

I have been thinking. There’s an aspect to playing an online game that was somewhat unexpected to me - you bond with people even though you don’t personally know them. You log in every so often and run with the same crowd (yes, I totally typoed that into “crown” at first). You check in with the regulars from your Free Company (read: guild). You have a set of people in your friends list. Maybe you have some additional linkshells (read: private group chats) that you like to visit every so often.

There’s a certain comfort in seeing familiar… well, not exactly faces. You meet avatars, fantastical characters that sometimes make you forget you’re there together with real people. For every player character there’s a person sitting somewhere behind a keyboard or a gamepad (well, almost, but botting is technically against the TOS).

I’m not a person who bonds or trusts easily. That’s just my personality. Interaction by proxy, through the ingame avatars, makes things much, much easier though. I can still be witty, make stupid jokes, annoy others with inappropriate comments and help them all the while. But if I decide to cease interaction, that’s easier too. That’s the part where your brain tricks you into thinking that even people you have spent many hours with are not important because they’re “hidden” behind characters.

I have watched the ebb and flow of people in our FC. FFXIV is a highly cooperative game, so you feel the impact of fellow players not being around anymore. It’s not necessarily that you’re losing. It’s the feeling of loss despite achieving your goals. The lessened atmosphere. The absence of a familiar friendly face.

This post was prompted by someone whom I consider a good friend leaving the FC. But the thoughts behind it have been true for a while now.


Whenever people leave I wonder what their reasons were. Were they unhappy? Did they get into an argument? Did their friends wander off? Or perhaps something else altogether?

I try to talk to people, then. Yes, talking is hard, I get that. However, I consider not trying a personal failure. It’s not that I have the need to convince people to return. My curiosity drives me to learn their reasons for leaving so that perhaps the FC can be a more friendly place in the future with fewer reasons for members to leave.


I remember a while ago when a group of friends left. They were open to discussion and it was clear from the beginning that the group had only sought temporary refuge at The Black Crown. Them leaving to start up their own Free Company was a decision that was given a lot of thought. They are still open to communication and it was a pleasure to host them as long as it lasted.

I remember someone leaving who was a roamer. It’s hard to quantify how many players are this type of person, but they did not stay long in any FC. They even said so up front, and close to no one gave it much thought when they eventually left. It hit a bit harder when their partner in crime left, because the two had earlier stated they would not leave together, but it wasn’t completely unexpected either.

I remember talking to Kakysha’s big brother at length about why he left his previous FC, how he talked about a feeling of not belonging and why he preferred to play in solitude for a while. I think he described it as feeling alone in a crowd. I suggested back then that perhaps it was not the right crowd for him, while not directly inviting him into our FC at the time because I felt that would have been tactless. I merely stated he was welcome should he ever want to join. Kakysha and her brother rarely meet - Eorzea is a big world after all - but they enjoy each other’s company tremendously.

Kakysha discussing important issues with her bigger brother in Kakysha's room

I remember my honored friend leaving with neither farewell nor complaint. It still hurts. I inquired about their reasons and received a vague answer that perhaps it was due to an argument or something another person might have said. Polite inquiry would not reveal a more concrete answer and I respect my friend too much to be nosier, even though I’m implicitly required to be since my recent promotion to a leadership position in the FC. I was merely saddened that they tried to talk to neither the leadership nor the person(s) in question. Without pointing out what exactly was wrong and talking through both actions and consequences, how can we strive to improve the trust and respect that I feel we owe our members? How can I try to provide sprouts (read: newcomers still in the early stages of the game) as comfortable a home as the Seraphs provided me when I was full of disappointment about my previous FC?

Should you read this, friend, good bye but not farewell. Know that you have a place at Crown, should you want to return.


Kakysha sends her greetings from Tamamizu where she’s still trying to gain the favor of the Kojin people so they grant her permission to obtain a striped ray. She’s looking forward to meeting the Ananta people though because she heard they are breeding elephants. Our favorite adventurer loves elephants. She said to tell you she’s sorry this isn’t a more story-heavy post.

Final Fantasy XIV: Stories from the Regulars

Posted on Tue 24 October 2017 • Tagged with Stories, Video Games

I’ve been playing Final Fantasy XIV (FFXIV) for more than half a year now. I can say without doubt that I have yet to run out of interesting stories to experience. The game is just so packed full with activities and seemingly none of them come without their own stories and lore.

After you have progressed to a certain point in the story, you can develop relations with the beast tribes in the game, earning their respect and helping them develop their society. There is a long quest chain revolving around the clumsy inspector Hildibrand Manderville. Even gathering and crafting progression comes with its own quests and stories.

I will not deny that there is a bit of a stretch between patches once you have completed all there is to the current main story. I don’t think you can really prevent that while ensuring that the quality of the main story is top notch. Releasing too often might make the quality of the writing and the setting suffer, and I most certainly do not advocate that tradeoff.

Let me tell you another chapter of Kakysha Saranictil’s story. Last time I told you about her role in the liberation of Doma and Ala Mhigo. Kakysha has not been idle for long even though she managed to sneak in a little vacation at the Rhalgr’s Reach holiday resort with Deithwen Addan, a long time friend.

Deithwen and Kakysha sitting on a high stone column looking down on Rhalgr's Reach

She was pleasantly surprised when Jenji Seraph also popped in. Postcard-worthy images were taken. “Wish you were here!”, to send home to other people in the Free Company.

Jenji, Deithwen and Kakysha taking a swim

Between helping with the efforts of rebuilding Ala Mhigo, she was helping the Kojin beast tribe - a people of bipedal turtles - come to terms with the conflict between the red and the blue faction of their people. She had some very busy weeks heading more into diplomacy and getting an understanding of more beast tribes. She reached an understanding with the Sahagin, fish people of La Noscea. She helped the colony of Vath, bug people of the Dravanian Forelands. She danced with the Vanu, bird people of The Sea of Clouds and even spoke to the Kobolds, who are mine dwellers of La Noscea. All that running around left her pretty exhausted but she found a serene place to relax in the newly arranged garden of her Free Company.

Kakysha sitting in a garden, below a tree and between falling cherry blossoms

Kakysha tried to branch into a few more different activities, like learning the Gladiator class and improving her fishing skills. Furthermore she’s been on a few expeditions to Rabanastre, not all of which have been entirely successful. She met Deithwen again when exploring the desert for places to fish.

Kakysha and Deithwen posing for an image

She vividly remembers trying to climb the big tower in Kugane. It was a frustrating experience that she decided to abandon after a few days of trying. Making precise jumps in the middle of the night with little illumination is hard, which is why she asked her fellow climbers (via /shout) to put up their Wind-up Suns to light up the tower. The community was nice and obliged and soon they had half of the tower lit for climbers and the challenge at hand could continue regardless of sunlight.

Multiple wind-up suns illuminating Kugane tower

Lately, there has been more work with the FC and Kakysha has been all over Eorzea, helping out where she could. She went with Syn Seraph, Jenji Seraph and Hugo Razgriz to Hax Silverstone’s battle against Susano. She joined Zireael Addan’s battle against Ravana and Hax’s battle against Sri Lakshmi. The greatest memory was helping Selina Unfug getting together a party of seasoned adventurers against Vishap for The Steps of Fate. The Full Party of Selina, Kakysha, Hax, Ianna Stark, Chiyuri Nelhal, Lieselotte Harnisch, Fancydoughnut Drayon and Seraphie Eryniel made short work of the dragon and didn’t even let him reach the third part of the arena. It was by far the best performance of this challenge ever experienced by Kakysha.

Some of this more recent work was due to more active recruitment of new members to Crown, some of it was due to more people joining Kakysha’s linkshell “Kaky’s dungeoneers” which she founded with the intent of helping people find helpful and friendly players for their duties.

bonus: Kakysha’s ninja playstyle

Excerpt from Kakysha’s notes.

The usage of Mug, Armor Crush, Jugulate and Assassinate is pretty self explanatory. The only thing worth adding to Shadow Fang is that Fuma Shuriken is said to be affected by it, but I do not know this for certain.

Kakysha sitting in the middle of Doton area of effect

marking & single target

Before the fight, I prepare with Jin (Yellow) => Chi (Red) => Ten (Blue) to use Huton and be faster. I follow up with Hide since that resets the countdown for Mudra.

Depending on the situation I will open with Shadewalker+ (see macros) after the tank pulls to shift enmity for my following actions onto them.

If we’re up against a single target, I’ll use Mark (see macros) to show my fellow adventurers which enemy will get +10% damage due to vulnerability.

  • I follow this up by either Blue => Red => Yellow => Suiton and North Wind if we’re already in battle and there is a likelihood that the target will move.
  • If the battle has not yet started or I’m about to Mark a target while no enemy is alive that has ever had enmity on me, I’ll go with Hide.

The mark will then be stabbed using Overwhelm (see macros) and I’ll use as many skills that do a lot of damage while the vulnerability debuff is up, e.g.

  • Dream Within a Dream
  • Blue => Red => Raiton
  • Kassatsu => Blue => Red => Raiton
  • Spinning Edge => Gust Slash => Duality => Aeolian Edge

If I’m not pressed for time and the likelihood of Area of Effect (AoE) attacks is low, Ten Chi Jin followed by Blue => Fuma Shuriken, Red => Raiton, Blue => Suiton is a good bet. In a more busy fight I’ll stick with the faster and more mobile Bhavacakra so that the healers have less reason to hate me.

multi target

When fighting multiple targets with a group, it is important to realize that enemies who have not attacked me and have never been affected by my actions are unaware that I exist. If all who are aware have perished, Hide may be used again without the need for Suiton. As ninjas have great damage output against single targets, this is worth keeping in mind.

If the above section does not apply, one may safely use Yellow => Blue => Red => Doton to place an AoE with some duration. If one expects the battle to be relatively short, Yellow => Blue => Futon may be preferred. Futon should also be preferred if one does not trust the tank to keep all enemies safely in one place. Personally, I dislike using Death Blossom as it feels inefficient. Furthermore, I dislike hearing the sound of repeated Death Blossom, as it gives me the impression my fellow ninjas are choosing the most convenient playstyle using only one button. Hellfrog Medium is also a good choice when up against multiple opponents who are already aware of one.

survival skills

You do get into some hairy situations as an adventurer. I’ll use Shade Shift if I feel that a particularly heavy blow is coming and will use Bloodbath to get some health back while the healer might be busy helping others. If the situation is very dire, I’ll fall back to Second Wind though that one cannot be used very often and will rarely be enough to make a significant difference.

Since Shadewalker cannot be used too often either, I’ll sometimes throw in a Diversion.

I cannot overstate the usefulness of Shukuchi when evading large scale AoEs, especially in raids where there is often lots of space available. Note: Do not use the skill to go through ticking AoEs, e.g. with Hashmal in Rabanastre. Shukuchi can also be used to make up for situations when the party coordination isn’t great, e.g. regarding who attacks which target. I’ll be able to help out with other targets with less travel time after I have eliminated my own.

While I’m on the topic of survival - treat Ten Chi Jin as a double-edged sword - it can massively increase your damage output but also plays tricks on your mind, e.g. “nah, I’ll want to get the third attack out, the healer will be fine if I get hit by the AoE once”. When in doubt, do not use it.

little used skills

I’ll freely admit that most of the time I forget that Smoke Screen even exists; the same goes for Hyoton. I deliberately do not use Death Blossom since it rarely feels appropriate. Enmity mitigation is easier done by using Shadewalker and then letting loose a burst of damage, since the tank should try to get enmity back from non-tank roles anyway.


When I’m writing macros, I try to make sure they fit some criteria. I want my macros to be helpful first, but it’s important to me that they also have some entertainment value (e.g. they have some sass) and that they fit the lore of FFXIV.

Shadewalker+ - shift 80% of enmity generated after action to target of target

/macroicon "Shadewalker"
/party Now blaming <tt> for everything that's wrong with Eorzea. auto-translate:Shadewalker
/action "Shadewalker" <tt>

Mark - point out which target will get vulnerability debuff

/macroicon Circle enemysign
/marking "Circle" <t>
/party Marking vulnerable target. auto-translate:Trick Attack imminent.

Overwhelm - cast debuff and remove target indicator after effect has ended

/macroicon "Trick Attack"
/action "Trick Attack"
/wait 10
/mk off

Prowess in battle, luck and a trustworthy selection of comrades are what bring you to the deepest chamber of the Lost Canals of Uznair. I was there together with Mia Guilharthina.

Kakysha, Mia and other treasure hunters posing in the deepest sluice of the Lost Canals

Image credit:

  • “Jenji, Deithwen and Kakysha taking a swim” by ~Jenji Seraph

An explanation of a handful of rsync parameters

Posted on Thu 05 October 2017 • Tagged with Institute for Computer Vision and Computer Graphics, Work

If you take a look at rsync, one of the community’s favourite syncing and file transfer tools, you will notice it has quite a lot of parameters (rsync --help). Here’s a short explanation of some of them and what they can be used for - taken from the .gitlab-ci.yml files of their respective projects.

example 1

This call is used in our automated deployment process for a web based project. It deploys the project into the correct directory, ensuring that it can still be read and executed after syncing. This is where the setting of group and user is important, since those are used for the creation of new files as well as the reading of code by the interpreter. Since this is deployed from a Git repository, it makes sense to keep a hidden exclusion file in the repository to avoid syncing specific files. Personally, I prefer to add --stats and --human-readable to every rsync that’s used for a deployment, since you can then see in the GitLab build logs what changed on site.

# In the .gitlab-ci.yml the command is on one line to avoid errors
# for the sake of readability I have reformatted the call.
# The order of the optional parameters has been changed to be
# more coherent.
sudo rsync --recursive \
           --perms \
           --stats \
           --human-readable \
           --times \
           --group \
           --owner \
           --usermap=gitlab-runner:labelme,0:labelme \
           --groupmap=gitlab-runner:labelme,0:labelme \
           --super \
           --exclude-from=.rsyncExcludeFiles \
           . /home/labelme/LabelMeAnnotationTool-master/
  • Copy everything including all files and subfolders (--recursive) from the current location (.) to the target (/home/labelme/LabelMeAnnotationTool-master).
  • Ensure that the permissions are the same at the target as they were at the source (--perms).
  • Afterwards display detailed, human-readable statistics about the transfer (--human-readable, --stats).
  • Make sure the modification times of the files at the target location match the ones at the source location (--times).
  • Modify the group and owner of the files (--group, --owner), mapping gitlab-runner and the user with userid 0 to labelme (--usermap=FROM_USER:TO_USER,FROM_USERID:TO_USER, --groupmap=FROM_GROUP:TO_GROUP,FROM_GROUPID:TO_GROUP).
  • Explicitly try to use super-user operations (--super), e.g. changing owners. This will lead to errors if such operations are not permitted on the receiving side, indicating a lack of permissions or filesystem features, enabling you to detect if something went wrong.
  • Skip listed files in .rsyncExcludeFiles while syncing (--exclude-from=EXCLUSION_LIST_FILE).
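The exclusion list referenced by --exclude-from is a plain file with one pattern per line. A hypothetical .rsyncExcludeFiles might look like this - the patterns are assumptions, not the actual list used at the institute:

```
.git*
*.log
node_modules/
```

Patterns follow the same rules as --exclude; a trailing slash matches directories only.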

example 2

This call is used when deploying our Puppet configuration from Git. It was here that I first needed the --*map features, since the files initially ended up being owned by gitlab-runner. This is fine when every file you deploy via Puppet to another machine is explicitly listed with its permissions and owner set in your codebase. If this is not the case, Puppet (3.x) will implicitly set the owner to the same UID that is used on the Puppetmaster - leading to all kinds of strange situations. To avoid this, I’m mapping owners and groups.

Additionally I’ll chmod 640 all files and chmod 750 all directories that have been synced via the call to avoid having unsafe permissions on anything. All critical things should have their permissions set explicitly in our codebase anyway.
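The chmod step can be sketched as follows - a temporary directory stands in for the real deployment target, so the example is safe to run anywhere:

```shell
# Normalise permissions after a sync: files become 640, directories 750.
# TARGET is a stand-in for the real deployment directory.
TARGET=$(mktemp -d)
mkdir -p "$TARGET/manifests"
touch "$TARGET/manifests/site.pp"
find "$TARGET" -type f -exec chmod 640 {} +
find "$TARGET" -type d -exec chmod 750 {} +
```

Using `-exec … {} +` batches many files into one chmod invocation instead of spawning a process per file.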

I arrived at this specific combination of options when I wanted the deploy step to list only files that really changed. Since downloading dependencies invalidates the cache of Puppet modules, all external files are marked as new every time. This can be circumvented via checksumming, but that still leaves the modification dates of directories changed (they are set when the downloaded archives are unpacked), therefore requiring --omit-dir-times. Now, with this combination and --verbose, the logs contain only files changed in our codebase and ones that changed due to changes in our dependencies. There are no longer hundreds of files marked as changed just because r10k needed to fetch a module again.

# in the .gitlab-ci.yml the command is on one line to avoid errors
# for the sake of readability I have reformatted the call.
# The order of the optional parameters has been changed to be
# more coherent.
sudo rsync --recursive \
           --times \
           --omit-dir-times \
           --checksum \
           --sparse \
           --force \
           --delete \
           --links \
           --exclude=.git* \
           --group \
           --owner \
           --usermap=gitlab-runner:root \
           --groupmap=gitlab-runner:puppet \
           --human-readable \
           --stats \
           --verbose \
           . /etc/puppet
  • Copy everything including all files and subfolders (--recursive) from the current location (.) to the target (/etc/puppet).
  • Make sure the modification times of the files at the target location match the ones at the source location (--times).
  • When checking which files to sync, ignore the modification dates on folders (--omit-dir-times) and rely on checksumming only (--checksum) instead of checking modification times and file size.
  • Try to intelligently handle sparse files (--sparse). I’m rather sure this ended up here without any actual cause, picked up together with other parameters rather than chosen deliberately.
  • Delete files at the destination that are not present at the source (--delete) and delete directories even if they are not empty (--force).
  • Recreate symlinks at the destination if there are any at the source (--links).
  • Exclude Git specific files and folders (--exclude=.git*).
  • Modify the group and owner of the files (--group, --owner)
  • Change the owner from gitlab-runner to root (--usermap=FROM_USER:TO_USER)
  • Change the group from gitlab-runner to puppet (--groupmap=FROM_GROUP:TO_GROUP)
  • Afterwards display detailed, human-readable statistics about the transfer (--human-readable, --stats).
  • Additionally display files transferred and a summary (--verbose).


If you are using sudo in combination with an automated system in which non-admin users can access sudo without a password for specific tasks, make sure you have appropriate whitelists in place. You could, for example, restrict the use of sudo to a specific user, on a specific host, with a specific command. Given that the syntax of sudoers is not as precise as it might be with regular expressions, you’ll have to be quite specific about which command you want to allow and where to put wildcards, should you use any. Here’s an example.

# No line breaks here, since it might confuse readers and they might end up with a damaged `sudoers` config.
your_user your_host.fully.qualified.domain= NOPASSWD: /full/path/to/command --a_parameter --another__parameter /the_source /the/target/location

Technical skills after almost 4 years at ICG

Posted on Thu 28 September 2017 • Tagged with Institute for Computer Vision and Computer Graphics, Work

Please note that this post is intentionally written in past tense to avoid having to rewrite it completely in the future.

This post aims to be a summary of technologies I’ve learned to use during my period at the Institute for Computer Vision and Computer Graphics at TU Graz.



monitoring

While I was fortunate to avoid Nagios, I have quite a lot of experience with Sensu and its quirks. I sent several patches to their plugins and deployed a sizeable setup of checks and metrics, some of which were heavily customised to the ICG’s needs. We deployed Sensu for Linux and Windows. I’ve written a post about how to set up Sensu with Puppet covering part of that work. I was also active on the sensu-users mailing list.

Furthermore I deployed Logstash and Logstash-forwarder for collecting, analysing and structuring log files. This work included coming up with custom patterns for matching as well as defining configuration for ingesting logs of dpkg, syslog, apache & apache-error, nginx & nginx-error, seafile & seahub as well as fail2ban.

The collected data was available via Grafana for metrics, Uchiwa for results of Sensu and Kibana for logs, all protected behind an Apache reverse proxy with authentication via whitelisted LDAP accounts. I integrated custom URLs to easily go from Sensu to the corresponding results in Grafana and Kibana and built multiple custom Grafana dashboards. Those dashboards either displayed general information or were custom-tailored to solving particular problems in operations.

Before becoming more intimate with Sensu, I wrote our own script for monitoring the output of two CLIs of hardware RAID vendors (storcli, tw_cli).

web servers

I have worked with Apache as well as Nginx, the majority of time with Apache, setting up static websites, WSGI based applications and reverse proxies with LDAP or password authentication. A part of the work with Apache was done via Puppet modules.

source control

I was in charge of two GitLab instances - one of which I migrated from a source installation to the Omnibus package - and maintained an old Apache Subversion instance. I am a strong supporter of Git and if need be can adjust to using SVN again. I’ve helped several of the researchers to set up their projects for GitLab Continuous Integration and used the feature myself extensively for both development and administration projects - a topic very dear to me.


For monitoring purposes I have built a setup including Graphite for storing metrics data, Redis for keeping monitoring related transient data for Sensu and Elasticsearch for storing logs with Elastic curator for removing them after a defined retention period. Setup of Graphite and Redis was done via Puppet modules.

I have limited knowledge of MySQL and PostgreSQL. I was part of a team developing an application using a Postgres backend. Further tasks included creating backups with pg_dump and editing a huge database dump by hand in an editor. The thought of this should give you nightmares. MySQL tasks were mostly creating backups.

I learned a few things about LDAP while I was modifying users, groups and configuration entries during everyday operations and operating system upgrades. Given my dislike of Java, I refused to install Java and by extension avoided using Apache Directory Studio, instead writing my .ldif files by hand using templates in my editor and applying them via ldapadd.

virtualization and containerization

For various development processes I was using Virtualbox together with Vagrant for easy setup of new machines that get thrown away. In production, Xen was used - I’ve written about some of that experience. Additionally, I built several custom Docker containers for the GitLab CI. We did not use any Docker containers hosting services in production. I have written about building a container for this very blog though.

configuration management

I’ve written the Puppet setup at the institute managing many, many services. Some hosts are entirely controlled by Puppet. The configuration is deployed from a git repository to the Puppetmaster after being run through syntax tests and integration tests via GitLab and Docker.

For shorter, one-shot tasks on multiple hosts I’ve lately taken a liking to Ansible. Generally I find configuration management solutions more intuitive than ones that specify processes.


I’ve cleaned up, simplified and improved readability of an existing Shorewall setup. The entire configuration is being dry-run in the GitLab CI before being deployed to production on success.

I’ve configured and deployed TLS for several services, including LDAP, web servers, IMAP/POP3 (Cyrus), SMTP (Postfix), RabbitMQ, Logstash and more. I’ve rolled out several versions of OpenSSH configurations and Fail2ban deployments via our own Puppet code. Generally I’m of the opinion that even traffic in your own datacenter should be encrypted - a conviction that stems from reading that Google’s internal lines were tapped a few years back.

I was in charge during some unfortunate events where security issues popped up and had to be investigated or violations to our IT policies had to be dealt with. These policies were based on the Admin Team’s decisions and put into text by me. I’ve submitted detailed written reports about these activities to my boss.

troubleshooting OSs

While my main focus has been on Linux servers - mostly Ubuntu with some Debian - I have been busy troubleshooting problems on Linux desktops too, including broken X, crashing LightDM, missing CUDA drivers and issues with Secure Boot (and the shim-signed package). I’ve also seen my fair share of macOS problems, given that we had several Mac users, including myself. Amongst the problems were inaccessible BootCamp partitions and completely broken installations due to users aborting upgrade processes. I have been mostly spared from Windows issues by my colleagues handling those. I have, however, written a tiny script allowing you to easily boot into Windows from your Ubuntu installation after realising the convenience of such a solution when using BootCamp.

upgrades and migrations

Over the years I’ve successfully upgraded many of our existing servers via do-release-upgrade or changing the Debian repository and fixed all occurring issues. I’ve also migrated a part of our infrastructure from DRBD 8 to DRBD 9 in order to replicate to more machines without layering.

Upgrading was made much easier by having all systems at a current state which I achieved by using Unattended Upgrades for many of our sources. Reading changelogs and news about new features to improve our infrastructure has been very helpful in that regard. One achievement I am very proud of is having a patch accepted into Ubuntu (Precise and Trusty).
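For reference, a minimal sketch of the relevant part of /etc/apt/apt.conf.d/50unattended-upgrades - the origins listed here are assumptions, not the institute’s actual configuration:

```
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
    "${distro_id}:${distro_codename}-updates";
};
```

Only packages from the listed origins are upgraded automatically; everything else waits for a manual run.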

Usually I work with a list of things to check after upgrading and manually merge or rewrite configuration files, which I find using the following command. A look into the logfiles of various services is always a good idea too.

find /etc -name "*dpkg-*" -or -name "*ucf-*" -or -name "*.merged" | sort
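Once such a file turns up, a diff between the live configuration and the version shipped by the package shows what needs merging. A small sketch, with temporary files standing in for the real /etc paths:

```shell
# Compare a live config with the package maintainer's new version
# before merging; the temp files stand in for e.g. /etc/foo.conf
# and /etc/foo.conf.dpkg-dist.
LIVE=$(mktemp)
SHIPPED=$(mktemp)
printf 'max_connections=100\n' > "$LIVE"
printf 'max_connections=150\n' > "$SHIPPED"
diff -u "$LIVE" "$SHIPPED" || true  # diff exits non-zero when files differ
```

The unified diff makes it obvious whether the packaged default can be taken over wholesale or needs a hand-merged compromise.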

Additionally I oversaw and implemented the switch from manual configuration and firewall rule sync to a setup controlled by Puppet which is able to keep two hosts synced and configured.


This is the grab-bag area. Most things in here didn’t warrant a longer section.

I replaced a manual process of dealing with DHCP with a Puppet-controlled setup - that’s thankfully very easy using ISC-DHCP-SERVER. I deployed multiple applications that use Shibboleth authentication and worked with the central university IT section on that. I configured and deployed Mattermost and Seafile, made sure our network mounts, automounts, samba shares and Mailman instance worked and NTP is synced.


I’ve written extensive documentation for new users and the on-boarding process, documentation for the Admin Team as well as a policy section. Additionally, I published several posts about my work with permission from the ICG here on my personal website.

Furthermore I meticulously took notes on all new issues and their solutions in our GitLab issue tracker, so that a knowledge base containing previous experiences was created instead of letting all my experience evaporate.

Reading recommendations (2017-08-13)

Posted on Sun 13 August 2017 • Tagged with Reading recommendations

~Onatcer tells devastating things about the new exams required to study Computer Science in Vienna in Der TU-Aufnahme-Test: Pleiten für alle! (German).

Troy Hunt provides a new service where one can check passwords against a gigantic collection of millions of leaked passwords with Introducing 306 Million Freely Downloadable Pwned Passwords.

Currently Final Fantasy XIV’s Moonfire Faire seasonal festival is running and I already played through the seasonal quests in order not to miss anything. ~Luxpheras from the Community Team put up the post Shaved Ice Ice Baby to promote the event with some pictures of the spoils.


How I publish this blog

Posted on Mon 07 August 2017 • Tagged with Development

It was 2015 when I finally decided to act upon my dissatisfaction with the WordPress publishing process and move to a different solution. I exported my posts and pages from its MySQL database and moved on to Pelican - a static site generator written in Python. Usually, when you hear “static site generator” you think of Jekyll. Jekyll is the static site generator people know of - the major reason for that being that it is used behind the scenes for GitHub Pages.

Jekyll is written in Ruby, however, and I have not put enough time into Ruby to be more familiar with it than exchanging some lines in existing code here and there. Python is my tool of choice and when a friend mentioned Pelican I was immediately hooked - even though it took me many months to finally put my plans into motion.

Back in the days: WordPress

WordPress had always struck me as being built for ease of use. It is heavyweight, can be deployed almost everywhere and its features are plentiful. There was one major pain point for me though: For a reason I have never figured out, none of the available native clients (e.g. Blogo, Marsedit) ever managed to show me more than my last few posts instead of a full view of all historical ones.

I frequently edit posts in the days after they are published. I fix typos, update the wording if I think it is bad after reading it again and sometimes add additional information. I consider publishing an article a bit like writing software or configuring a system. It often needs a little adjustment after it has been in use (or in testing) for some time. With WordPress that meant I had to go to the admin page every time to change something. The workflow was something akin to:

  • go to bookmarked login site
  • swear about login being insecure due to missing TLS deployment
  • log in
  • go to section “posts”
  • find the post in question
  • edit the post by copying the modified content from my local file to the website
  • preview the post on the site
  • save the post

I dislike the need to look for click targets, to scan for the relevant article in the list, the waiting between interactions on a slow connection. The setup screamed for some sort of automation but nothing seemed easy to set up at that point.

Uploading Pelican

Immediately after switching to Pelican for content generation, I found myself in the puzzling situation of having a blog but no easy way to publish it. A bit of investigation uncovered that Pelican ships with a Makefile that includes an ftp_upload target. I configured this and added a ~/.netrc file so I didn’t need to type my password every time an upload was performed. This worked fine for a while. I even wrote a little bash alias to run it.

source ~/.virtualenvironments/pelican/bin/activate \
  && cd ~/…/ghostlyrics-journal/Pelican \
  && make ftp_upload \
  && deactivate \
  && cd - \
  && terminal-notifier -message "GhostLyrics Journal published." -open "
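For completeness, the ~/.netrc file mentioned above follows the standard netrc format; a hypothetical entry with placeholder host and credentials:

```
machine ftp.example.com
login my_username
password my_secret_password
```

Since the file holds a plaintext password, it should be readable only by its owner (chmod 600).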

It was in May 2016 that the lftp build for macOS broke. That means that after an upgrade of macOS I was left without a way of easily deploying changes to the blog. Pelican uses lftp because of some of its features like mirroring a local folder and updating only the differences instead of copying the whole folder recursively every time you kick it. I think I tried to publish with Transmit once or twice but it is simply not built for this task.

I was enormously frustrated and heartbroken. I didn’t write anything for weeks, instead hoping a solution would surface that didn’t require engineering effort on my part. However, the build remained broken and so did my FTP upload.

After being inspired I decided that the status quo wasn’t acceptable and went on to build a way that allowed me to simply run publish in Terminal and have everything done for me - reproducibly and rock solid.

Up comes Vagrant

In October 2016 I came up with a Vagrantfile that allowed me to publish from an Ubuntu machine via Vagrant. This worked around the author of lftp seemingly having little interest in building for macOS.

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-16.04"
  config.vm.synced_folder "/…/ghostlyrics-journal", "/pelican"

  config.vm.provision "file", source: "~/.netrc", run: "always", destination: ".netrc"

  config.vm.provision "shell", env:{"DEBIAN_FRONTEND" => "noninteractive"}, inline: <<-SHELL
    apt-get -qq update
    apt-get -qq -o=Dpkg::Use-Pty=0 install -y --no-install-recommends \
      make \
      python-markdown \
      python-typogrify \
      python-bs4 \
      python-pygments \
      pelican
  SHELL

  config.vm.provision "shell", privileged: false, run: "always", inline: <<-SHELL
    make -C /pelican/Pelican ftp_upload
  SHELL
end

In short: I use a bento Ubuntu box because I’ve had bad experience on multiple occasions with the boxes in the Ubuntu namespace. I sync the folder my blog resides in to /pelican in the VM. I copy the .netrc file with the credentials. The VM gets some packages I need to run Pelican and calls the ftp_upload make target. This also got a new bash alias.

cd ~/vagrant/xenial-pelican \
  && vagrant up \
  && vagrant destroy -f \
  && cd - \
  && tput bel

Now, if you only ever publish a few times, this works fine and is perfectly acceptable. If you intend to iterate, pushing out changes a few times within half an hour, you’ll be stuck waiting more often than you’d like due to the VM booting and reconfiguring. This was necessary to avoid conflicts when I work on different machines with the Vagrantfile being in my Dropbox.

Wrapping it up with Docker

Enter Docker. Now I know what you are thinking: “Docker is not the solution to all our problems” and I agree - it is not. It seems like the right kind of tool for this job though. Being built on xhyve and therefore Hypervisor.framework it is decidedly more lightweight than Virtualbox. When it is already running, firing up a container that builds the blog, uploads it and shuts the running container down again is very, very fast.

I built the following Dockerfile with the command docker build -t pelican . while in the directory containing the Dockerfile and .netrc.

FROM buildpack-deps:xenial
LABEL maintainer="Alexander Skiba"

VOLUME "/pelican"
WORKDIR /pelican
ENV DEBIAN_FRONTEND noninteractive

ADD ".netrc" "/root"

RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      make \
      python3-pip \
      python3-setuptools \
      python3-wheel

RUN pip3 install \
      pelican \
      markdown \
      typogrify \
      bs4

CMD ["make", "-C", "/pelican/Pelican", "ftp_upload"]

Again, I build on top of an Ubuntu Xenial machine, work in /pelican, copy the .netrc file and install packages. This time, however, I install the packages via pip to get current versions. It is also of note that while building the image, one does not have access to files outside of the current directory and its subdirectories, which made a local copy of .netrc necessary. Furthermore, the paths for Docker volumes cannot be defined in the Dockerfile by design. Because of that, the new bash alias is this:

docker run -v /…/ghostlyrics-journal/:/pelican pelican

This short command starts the container called pelican with the given folder mounted as volume into /pelican. Since I don’t specify interactive mode, the CMD defined earlier is called and the blog built and uploaded. Afterwards the container exits since the command itself exits. Quite an elegant solution, I think.

Final Fantasy XIV: Stories about Fellowship

Posted on Sat 05 August 2017 • Tagged with Stories, Video Games

I started playing Final Fantasy XIV (FF) in February when my disappointment about the many quirks of Black Desert Online (BDO) reached an all-time high. After feeling that a lot of things were unpolished in BDO I wanted to try an MMO with monthly subscription - the assumption being that the extra money was used for a certain layer of polish and QA that I long for when playing a video game.

I was pleasantly surprised. All the GUIs were fine, not overloaded, no text outside of its intended boxes or similar signs of neglect on behalf of the developer. While the beginning of combat is rather boring and depressingly slow, it gets better as you gain more skills. The world is built with attention to detail, even though I felt that BDO’s world felt more alive, especially Altinova. I want to point out that the writing is superb. The jokes, pop culture references and times when the game doesn’t take itself seriously are amazing.

When taking pictures in FF I am almost always capturing events and experiences, even characters, whereas in BDO my favorite motif was the environment.

Nadzeya looking at grilled food in Altinova

Another thing I realized early on is how the game is built to foster community and friendliness. There are systems in place to help new players (Novice Chat), to encourage players to play older content with others (dungeon bonuses, second chances for Khloe’s Wondrous Tails) and to be generally helpful and cooperative while in an instance (player recommendations). All this is just so fundamentally different from the dog-eat-dog mentality in BDO, where you can basically get stabbed outside safe zones with little to no repercussions for the murderer.

Let me tell you about Kakysha Saranictil, a rogue and ninja fighting for the good of the people of Eorzea. She is a hero to the common folk as well as to statesmen. Fighting for the right cause is reason enough for her to help everyone, be they a poor miner in an almost forsaken village or the ruler of a grand city-state.

An airship leaving Ul'dah

While she started her journey as a pugilist (read: martial artist) in Ul’dah, the prosperous desert nation, she soon discovered that her true calling lies in the shadows, and so she became a member of The Dutiful Sisters of the Edelweiss in Limsa Lominsa, where she studied under Captain Jacke. As her travels led her all over Eorzea, she sadly realized that Jacke had little left to teach her. Thankfully, Oboro, a ninja hailing from Doma in the Far East, took her under his wing and taught her the ways of the ninja.

Kakysha sitting cloaked in Idyllshire

Now, while her comrades at the Scions of the Seventh Dawn certainly kept her busy defending this or that nation from both primal and Garlean threats, she spent her downtime well, building trust with a more conservative faction of Ul’dah’s lizard people, the Amalj’aa. A proud folk of warriors, they came to respect her when she helped them uphold their traditions again and again against their religiously fanatical kin revering Ifrit, as well as by defending their clanswomen.

Admittedly even a hero needs a little rest from time to time and what better use of said downtime would there be than finally having dinner with her close friend, Ser Aymeric de Borel.

Aymeric laughing about a joke Kakysha made

Kakysha smiling at Aymeric

But even between all those events, she found a sense of belonging, of fellowship. Kakysha joined a Free Company soon after starting her journey, but ultimately felt unfulfilled by both the people and their way of treating each other. After a long period of solitude she came across the Seraphs, Jenji and Syn Seraph, who invited her to join their Free Company, The Black Crown, where people were pleasant and all was well. While Crown has a considerable number of adventurers who have failed to show up in recent times, there is a core group of heroes who are there to help others, to talk and to have fun with.

Recently one might have gotten the impression that Kakysha had become complacent, ignoring the plight of her fellow people. Nothing could be further from the truth - it is only that she needed to focus on solving the biggest issues first (namely, the liberation of Doma and Ala Mhigo) before tackling the smaller issues (leftover sidequests) now that a manner of peace has been established.

Kakysha watching the people in Quarrymill

Look out for her on the Phoenix, EU server - you’ll know her by her completely sand colored clothes - be they adorned by gems and jewels or more work oriented with belts and pouches, bright red hair and glasses.