Joint JKU/UAS International PhD Program in Informatics

The Joint JKU/UAS International PhD Program in Informatics supports international (typically non-German-speaking) PhD students at Johannes Kepler University Linz (JKU) and the University of Applied Sciences Upper Austria (UAS), Campus Hagenberg, in most aspects of successfully completing a PhD at JKU. Students are integrated into the regular Doctorate Degree Program in Technical Sciences at the Faculty of Engineering and Natural Sciences. The normative length of the program is 6 semesters (3 years), although students may take longer if required.


Land Oberösterreich and UAS co-sponsor funding for this PhD program in the form of stipends for international PhD students. Students are granted the personnel cost rates defined by the FWF for doctoral candidates (corresponding to university employment of 30h/week, with the remaining 10h/week intended for independent writing of the material leading towards the PhD thesis).

Applications in 2015

The second round of stipends is currently in an open application phase. International candidates with a Master's degree in computer science or a closely related area may apply for one of the research areas listed below. Each research area is associated with one primary supervisor at JKU or UAS; a second supervisor will be selected based on the specific research topic.

Please send your application, including CV, motivation letter, and references, to office [at] ins [dot] jku [dot] at, stating the title of the research project and the supervisor in the plain-text part of the email. Please also submit a project proposal covering your motivation and research interests, as well as any additional material strengthening your application.


  1. Start of application period: August 1, 2015
  2. Application deadline: October 11, 2015
  3. Notification of acceptance: October 31, 2015
  4. Earliest possible start of PhD studies: November 1, 2015


Research topics

  • Engineering in the Cloud – A New Dimension of Collaboration and Cooperation
    Supervisor: Univ.Prof. Dr. Alexander Egyed

    The engineering of systems is unimaginable without software tools. Engineers use them to capture and analyze engineering problems; to specify, implement, test, and maintain engineering solutions; and to manage engineering processes. Yet there is a gap between the needs of independently working engineers and the needs of a collaborative engineering team. The existing tool landscape emphasizes the former. Most engineering tools are single-user applications – often of excellent quality but limited in that they support the work of individuals rather than that of a group. Existing engineering practices thus place the burden of collaboration on engineers who lack awareness of each other's work. This problem has been a significant contributor to many high-profile engineering failures, and it has shaped two beliefs: 1) the future requires integrated engineering tools, and 2) tool-integration problems can only be overcome through large standardization efforts (covering interoperability, meta models, ontologies, tool chains, and transformations). Collaborative Engineering in a Multi-Tool Environment (DesignSpace) embraces the first belief but rejects the second. Standardization is very important for engineering, but over-standardization forces compromises and favors established routines – both of which stifle innovation and creativity. The DesignSpace project envisions that engineering strength comes from tool diversity. To achieve a breakthrough in engineering, the next generation of engineering environments does not require better tools but innovative ways for engineers to collaborate using the tools they already have. Tool quality alone cannot guarantee engineering quality. What the existing tool landscape misses is how knowledge flows among engineers and the tools they use. Without this knowledge, engineers cannot effectively visualize the bigger picture, propagate changes among tools, or detect errors.
    This is known as the tool interoperability problem, and it is the most critical software and systems engineering problem today. The DesignSpace bridges the gap between single-user tools and collaborative engineering environments by providing engineers with flexible cross-tool sharing, transformation, linking (traceability), and guidance (e.g. inconsistency detection) to enable multi-user collaboration on an unprecedented scale. In contrast to ongoing integration efforts, the DesignSpace is a breakthrough in that it does not affect which tools engineers use or how they use them.

  • Reuse: Mining and Evolving Variability in Software-Intensive Systems
    Supervisor: Univ.Prof. Dr. Alexander Egyed

    With the advent of technologies such as the internet and mobile computing, developing software-based products as one-of-a-kind standalone products is no longer economically feasible. Instead, modern software must be designed to execute in multiple environments (hardware and software platforms) and different contexts (e.g. desktop or mobile), to interact with other software products, and even to be part of larger software ecosystems. These and other technological and economic trends are changing the way software is developed, from a product-centric perspective to product portfolios of similar software products that are tailored to varying, customer-specific requirements.
    However, efficiently developing software product portfolios is not an easy endeavour. The most common scenario is that portfolios are reverse-engineered from a diverse pool of existing variants of software products that were created with ad hoc techniques. These techniques are collectively called Clone and Own (C&O) and commonly rely on manual, undisciplined, and generally undocumented development practices. Not surprisingly, even when dealing with a small number of variants, C&O approaches lead to maintenance problems such as inefficient bug fixing, wrong feature updates, duplicated functionality, and redundant and inadequate testing. All these factors inevitably result in low-quality software with performance and functionality faults and limitations, and render software companies unable to cope with fast-evolving functionality requirements or technological advances.
    The main goal of the proposed project is to develop an integrated framework capable of providing scalable, robust, flexible, extensible, and automated support for the effective management of software product portfolios. Our work aims to provide a more formal and methodological footing for C&O practices such that their shortcomings are properly addressed. The approach we propose is incremental and, compared to traditional product lines, requires no hefty upfront investment in the production of the first consolidated portfolio. This enables adopters to reap benefits early on and continuously.
    The envisioned framework will orchestrate and extend work from several important research areas in software engineering, e.g. automated software repair and search-based software engineering, which rely on novel algorithms inspired by nature to solve complex engineering problems. The proposed project aims to develop software tools and techniques, and to provide adequate guidance and best practices identified through their application in case studies from academia and local industry. The proposed project will be based at the Institute for Software Systems Engineering, headed by Prof. Alexander Egyed, at the Johannes Kepler University Linz, a world-leading research institution in software engineering.

  • Model-Driven Engineering to Ensure Correct System Behavior at Runtime
    Supervisor: Univ.Prof. Dr. Alexander Egyed

    Software systems have complex behavior, and it is critical to ensure that this behavior conforms to the goals set forth by their customers and users. Take, for example, a video-on-demand (VOD) player that lets users select and play back movies. Such a system responds to given stimuli – for example, it starts playing a movie (response) when a user presses the play button (stimulus). Today, the correctness of such a system is demonstrated through testing, but testing is time consuming because one needs to explore all possible ways of using a system – a near infinite number of possibilities. Each test has to be written by hand, with a developer deciding the responses for every possible combination of stimuli – a very expensive and error-prone task. This work investigates how models can help in validating a system's correct behavior. Today, developers create models during the initial development activities to help transition a system's requirements to code, but these models are often not used during testing. This is a missed opportunity, because models can describe system behavior, and this modeled behavior could also be useful for testing a system at runtime. For example, state machines let developers model valid sequences of stimuli and responses. In the context of our VOD example, a state machine could define that the playing of a movie (i.e., pressing the play button) must be preceded by the user selecting a movie. However, it is not straightforward to use such models to help test a system, because models tend to be at a much higher level of abstraction. This work thus needs to solve how to overcome this difference in abstraction and also address problems related to traceability. For example, models can only be used for testing a system if we understand where and how the various model elements are implemented in the system.
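    To make the idea concrete, the following minimal sketch (invented for illustration; the states and stimulus names are assumptions, not part of the project) uses a hand-written state machine for the VOD example as a runtime oracle that flags stimuli arriving in an invalid order:

```python
# Minimal sketch (not the project's actual tooling): a state machine for the
# hypothetical VOD player, used as a runtime oracle that flags stimuli
# arriving in an invalid order.

VALID_TRANSITIONS = {
    # (state,      stimulus) -> next state
    ("idle",       "select"):   "selected",
    ("selected",   "play"):     "playing",
    ("playing",    "pause"):    "selected",
    ("playing",    "stop"):     "idle",
}

def check_trace(stimuli, start="idle"):
    """Return (ok, state) after replaying a stimulus trace against the model."""
    state = start
    for s in stimuli:
        nxt = VALID_TRANSITIONS.get((state, s))
        if nxt is None:          # stimulus not allowed in this state
            return False, state
        state = nxt
    return True, state

# Pressing play before selecting a movie violates the model:
print(check_trace(["play"]))             # -> (False, 'idle')
print(check_trace(["select", "play"]))   # -> (True, 'playing')
```

    A real system model would be far larger, and relating such model elements to the implementation is exactly the abstraction and traceability problem described above.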

  • Popularity Prediction of Multimedia Content in Social Media
    Supervisor: Assoc.Prof. Dr. Markus Schedl

    Usage of social media has increased tremendously during the last couple of years. Nowadays, people create, share, consume, and comment on all kinds of multimedia items through online social networks and platforms. Predicting whether a particular item will become popular is a hot topic in both academia and industry. This PhD thesis will approach the problem in a multimodal way: elaborating new techniques to exploit a variety of data sources (e.g., multimedia content descriptors, microblogs, social network structure, or consumption histories), designing computational features that will serve as predictors in machine learning algorithms, and thoroughly evaluating them in comprehensive experiments.

    This PhD thesis will complement and extend research currently conducted at the Department of Computational Perception in the context of the project "Social Media Mining for Multimodal Music Retrieval". Part of that project is the creation of popularity models for music items and their subsequent exploitation to predict the popularity of an artist or a song. The proposed PhD thesis will go several steps further: content categories will not be limited to the music domain, and a wide variety of clues for predicting popularity will be considered in order to elaborate comprehensive models and algorithms capable of accurately estimating whether a multimedia item will become popular in the near future. These clues or features will be mined from social media and other related sources. Their acquisition and processing will require exploiting techniques such as network analysis, text and web mining, time series analysis, influential user detection, multimedia processing, machine learning, and others.
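    As a point of reference, the log-linear model of Szabo and Huberman (cited below) can be sketched in a few lines: the popularity of an item after a long period correlates linearly, on a log scale, with its popularity shortly after publication. The view counts below are invented purely for illustration:

```python
import math

# Sketch of the log-linear baseline from Szabo & Huberman (CACM 2010):
# fit ln(late popularity) = alpha * ln(early popularity) + beta.
# All data here is invented toy data, not real measurements.

def fit_log_linear(early, late):
    """Least-squares fit of ln(late) = alpha * ln(early) + beta."""
    xs = [math.log(e) for e in early]
    ys = [math.log(l) for l in late]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    beta = my - alpha * mx
    return alpha, beta

def predict(early_views, alpha, beta):
    """Predicted long-term popularity given early popularity."""
    return math.exp(alpha * math.log(early_views) + beta)

# views after 1 day vs. after 30 days for a few (invented) items
early = [10, 100, 1000, 5000]
late = [80, 900, 11000, 52000]
a, b = fit_log_linear(early, late)
print(round(predict(500, a, b)))  # predicted 30-day views for a new item
```

    The thesis would go well beyond such a single-feature baseline by combining many multimodal predictors, as described above.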


    Predicting the Popularity of Online Content, Gabor Szabo and Bernardo A. Huberman, Communications of the ACM, 53(8):80-88, August 2010.
    On Popularity Prediction of Videos Shared in Online Social Networks, Haitao Li, Xiaoqiang Ma, Feng Wang, Jiangchuan Liu, and Ke Xu, Proceedings of the 22nd ACM International Conference on Information and Knowledge Management (CIKM), San Francisco, CA, USA, 2013.
    Twitter-driven YouTube Views: Beyond Individual Influencers, Honglin Yu, Lexing Xie, Scott Sanner, Proceedings of the 22nd ACM International Conference on Multimedia (ACM MM), Orlando, FL, USA, 2014.
    The Lifecycle of a YouTube Video: Phases, Content and Popularity, Honglin Yu, Lexing Xie, Scott Sanner, Proceedings of the 9th International AAAI Conference on Weblogs and Social Media (ICWSM), Oxford, UK, 2015.
  • Collaborative Exploration of Large Music Collections
    Supervisor: Assoc.Prof. Dr. Markus Schedl
    Online digital streaming (e.g., YouTube), web radio (e.g., Pandora), and automatic playlist generation (e.g., Spotify) have enabled music listeners to access virtually all (Western) music in the world. While available music recommender systems provide easy access to these large music catalogues, they suffer from shortcomings such as cold-start problems and popularity and community biases. As a result, recommended songs are frequently selected from a rather small pool of music pieces that are popular among the users of the respective system.
    In this PhD thesis, we will follow a different strategy to access large music catalogues, namely the collaborative exploration of music collections via graphical user interfaces. In particular, based on content and context descriptors of music pieces, the candidate will research novel methods for scalable clustering of large music collections (e.g., building upon techniques such as t-distributed Stochastic Neighbor Embedding), for visualizing the resulting clusters in intuitive and appealing ways, and for collaboratively exploring the respective music collections to foster interaction between listeners within these visualizations. The candidate will be able to build upon previous work carried out at the Department of Computational Perception, more specifically, research and development in the context of the nepTune and the Music Tweet Map interfaces.
    The research conducted in this PhD thesis will involve techniques from web mining, (unsupervised) machine learning, information visualization, and human-computer interaction, in particular user interface design. 
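    As a toy illustration of the clustering step, the sketch below runs plain k-means over invented 2-D content descriptors. This is a greatly simplified stand-in: the thesis would target scalable methods (e.g. t-SNE-based embeddings) on real audio and context features, and the "tempo/energy" data here is purely illustrative.

```python
# Simplified stand-in for the clustering step: plain k-means over toy 2-D
# content descriptors. Real work would use scalable techniques (e.g. t-SNE
# embeddings) on real features; this data is invented for illustration.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(p[d] for p in pts) / len(pts) for d in range(len(pts[0])))

def kmeans(points, k, iters=20):
    centers = [points[0], points[-1]]  # deterministic init for this sketch (k=2)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# toy (tempo, energy) descriptors forming two clearly separated groups
tracks = [(0.10, 0.20), (0.15, 0.10), (0.20, 0.25),
          (0.90, 0.80), (0.85, 0.90), (0.95, 0.85)]
centers, clusters = kmeans(tracks, k=2)
print(sorted(len(c) for c in clusters))  # -> [3, 3]
```

    The resulting cluster centers could then serve as anchors for the visualization and collaborative exploration interfaces mentioned above.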


    An Innovative Three-Dimensional User Interface for Exploring Music Collections Enriched with Meta-Information from the Web, Knees, P., Schedl, M., Pohle, T., and Widmer, G., Proceedings of the ACM Multimedia 2006, Santa Barbara, CA, USA, 2006.
    Exploring Geospatial Music Listening Patterns in Microblog Data, Hauger, D. and Schedl, M., Proceedings of the 10th International Workshop on Adaptive Multimedia Retrieval (AMR), Copenhagen, Denmark, 2012.
    Adaptive Multimodal Exploration of Music Collections, Lübbers, D. and Jarke, M., Proceedings of the 10th International Society for Music Information Retrieval Conference (ISMIR), Kobe, Japan, 2009.
    Globe of Music - Music Library Visualization using GeoSOM, Leitich, S. and Topf, M., Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR), Vienna, Austria, 2007.
  • Virtual Machine Services for Parallel and Concurrent Programming Languages
    Supervisor: Univ.Prof. Dr. Dr. h.c. Hanspeter Mössenböck

    Modern processors come with an increasing number of cores, and programming languages are challenged to exploit them efficiently. Parallel and concurrent programs require communication and synchronization mechanisms such as message passing, locks, monitors or software transactional memory.
    Many modern programming languages are implemented on top of a virtual machine (VM) such as the Java VM, the Microsoft .NET Common Language Runtime, or a JavaScript VM. These virtual machines provide services such as garbage collection, threading, or deoptimization. Currently, however, they provide only very little built-in support for parallelism and concurrency.
    The goal of this dissertation is to investigate what kinds of VM-level concurrency services are necessary for modern parallel programming languages, how these services interfere with other VM services, and how they can be integrated efficiently into a state-of-the-art VM. Optimizations and new mechanisms should be devised as appropriate.
    The research should be done within the Truffle/Graal VM, a novel Java VM developed by Oracle Labs on top of the HotSpot VM. Truffle is a self-optimizing interpreter that uses run-time feedback and tree rewriting to adjust the abstract syntax tree (AST) of the executing program to the current execution profile. Frequently executed parts of the AST are dynamically compiled to efficient binary code using the Graal compiler. The Truffle/Graal VM provides garbage collection, threading, deoptimization, and run-time profiling, but so far it does not have built-in concurrency services.
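    The self-optimizing principle can be illustrated with a toy sketch (this is not Truffle code; all names and structure are invented): a node starts generic, rewrites itself to an int-specialized version after observing int operands, and deoptimizes back to the generic version when that assumption breaks.

```python
# Toy illustration (not Truffle itself) of a self-optimizing AST node:
# specialize on observed operand types, deoptimize when the assumption breaks.

class AddNode:
    def __init__(self):
        self.impl = self._generic          # start with the generic behavior

    def execute(self, a, b):
        return self.impl(a, b)

    def _generic(self, a, b):
        if isinstance(a, int) and isinstance(b, int):
            self.impl = self._int_add      # rewrite: specialize on ints
        return a + b

    def _int_add(self, a, b):
        if not (isinstance(a, int) and isinstance(b, int)):
            self.impl = self._generic      # deoptimize: assumption broken
            return a + b
        return a + b                       # fast path for ints

node = AddNode()
print(node.execute(1, 2))       # 3 – first execution specializes the node
print(node.execute("a", "b"))   # 'ab' – triggers deoptimization
```

    In Truffle, such specialized trees are what the Graal compiler turns into efficient binary code; the dissertation would ask how concurrency services fit into this picture.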
    We are looking for an international PhD student with a Master's degree in Computer Science and a solid background in compiler and VM technology. The PhD position will be hosted at the Institute for System Software at JKU and will be supervised by Prof. Hanspeter Mössenböck. It will be part of the Compiler and JVM Research Group, which currently consists of 8 researchers from Oracle Labs as well as 1 PostDoc and 3 PhD students from the institute.
  • Identification of Unknown and Modified Peptides in High-Resolution Tandem Mass Spectra
    Supervisor: FH-Prof. Dr. Stephan Dreiseitl

    Recent developments in high-resolution mass spectrometers carry great potential for research on biological and health questions. Still, in commonly used approaches, mass spectra are identified by database search, i.e., by searching for a match in a sequence database of known proteins. This approach lacks the possibility of identifying unknown proteins or unconsidered biologically relevant modifications, especially for poorly studied organisms.
    The aim of this project is to develop algorithms and bioinformatic concepts incorporating de novo identification, blind modification search, and genomics data for peptide and protein identification in proteomics analyses, exploiting all potential benefits of the newly developed machines. A new approach for de novo identification, i.e., for explaining mass spectra in the absence of sequence database information, shall be developed to detect unknown peptides and shall be combined with adequate and fast blind modification search. Alternative splice sites and amino acid substitutions that occur, e.g., due to single nucleotide variants, may also hamper spectrum identification, as those peptide compositions are often missing from the protein databases used. Discovery of those changes is of high value especially for the emerging field of personalized medicine, e.g., in drug efficiency studies. Thus, information from next-generation sequencing data of RNA sequences shall also be included in the spectrum identification approaches to detect sample-specific sequence variants.
    We are looking for outstanding students with a degree in Bioinformatics/Computer Science or a related discipline and, if possible, a background in proteomics and mass spectrometry. The candidate will be supervised by Stephan Dreiseitl in collaboration with a co-supervisor at the Johannes Kepler University Linz, Austria, and will be part of the Bioinformatics Research Group at the University of Applied Sciences Upper Austria. The university is located in Hagenberg, 20 minutes from Linz, the capital of Upper Austria.
    The project will be performed in close collaboration with the Proteomics Group at the Institute of Molecular Pathology (IMP), Vienna (Head: Karl Mechtler).
  • Self-organizing and Self-adaptive Distributed Evolutionary Algorithms
    Supervisor: FH-Prof. Dr. Michael Affenzeller

    Evolutionary algorithms are usually applied to real-world modeling and optimization problems by using historical data, models and constraints. However, in practice we often face interrelated and interdependent scenarios under changing environments.
    The main aim of this thesis project is to research open-ended evolution and emergent behavior, incremental learning, as well as online adaptation or exchange of the involved optimization algorithms. Cooperative self-organizing and self-adaptive distributed algorithms have to be developed that integrate knowledge obtained by fitness landscape analysis (FLA) and offline surrogate modeling in order to enable efficient optimization under real-world conditions. Using information gained through FLA and surrogate modeling (both of which can be run offline) seems especially promising for handling dynamic environments as well as for the robust and timely detection of regime shifts. Variants of age-layered population structures (ALPS) appear particularly interesting for open-ended evolution because they enable a continuous inflow of new genetic diversity.
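    The ALPS idea can be sketched as follows (a toy illustration with an invented fitness function, not a proposed design): each layer only admits individuals below an age limit, too-old individuals migrate upwards, and the bottom layer is reseeded with fresh random genomes, providing the continuous inflow of new genetic diversity mentioned above.

```python
import random

# Toy sketch of an age-layered population structure (ALPS) minimizing the
# invented fitness x^2. Layer sizes, age limits, and operators are arbitrary
# choices for illustration only.

random.seed(0)
AGE_LIMITS = [3, 9, float("inf")]   # maximum age per layer
LAYER_SIZE = 10

def fitness(x):
    return x * x                    # lower is better

def evolve(generations=60):
    layers = [[{"x": random.uniform(-10, 10), "age": 0}
               for _ in range(LAYER_SIZE)] for _ in AGE_LIMITS]
    for _ in range(generations):
        for li, layer in enumerate(layers):
            # mutation + truncation selection within each layer
            children = [{"x": ind["x"] + random.gauss(0, 1), "age": ind["age"]}
                        for ind in layer]
            pool = sorted(layer + children, key=lambda i: fitness(i["x"]))
            layers[li] = pool[:LAYER_SIZE]
            for ind in layers[li]:
                ind["age"] += 1
        # migrate individuals that exceed their layer's age limit upwards
        for li in range(len(layers) - 1):
            young = [i for i in layers[li] if i["age"] <= AGE_LIMITS[li]]
            old = [i for i in layers[li] if i["age"] > AGE_LIMITS[li]]
            layers[li] = young
            layers[li + 1] = sorted(layers[li + 1] + old,
                                    key=lambda i: fitness(i["x"]))[:LAYER_SIZE]
        # reseed the bottom layer with fresh random individuals
        while len(layers[0]) < LAYER_SIZE:
            layers[0].append({"x": random.uniform(-10, 10), "age": 0})
    best = min((ind for layer in layers for ind in layer),
               key=lambda i: fitness(i["x"]))
    return best["x"]

print(abs(evolve()))   # converges close to the optimum at 0
```

    The thesis would replace such toy operators with cooperative distributed algorithms informed by FLA and surrogate models.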
    The results of the project should be integrated into HeuristicLab, a powerful open-source framework for heuristic and evolutionary optimization methods designed and implemented by the proposing research group. HeuristicLab offers a wide variety of optimization algorithms and problem formulations, which shall be hybridized and orchestrated within this PhD project.

    We are looking for outstanding students with a degree in Computer Science or a related discipline and a strong background in computational intelligence, machine learning, and C# programming. During your work on the PhD project, you will be part of the Heuristic and Evolutionary Algorithms Laboratory (HEAL), one of the leading research groups in the field of applied evolutionary computation. Members of HEAL have implemented and maintain the open-source software HeuristicLab, and they are continuously working on several application-oriented research projects with leading industrial companies in Austria. You will be supervised by Michael Affenzeller in collaboration with a co-supervisor at the Johannes Kepler University Linz, Austria. The position is fully funded (comfortably covering living expenses and health insurance) for 3 years.
    HEAL is part of the Department for Software Engineering of the University of Applied Sciences Upper Austria in Hagenberg, Austria. The campus in Hagenberg is located 20 minutes north of Linz, the capital of Upper Austria, and is easily reachable by car or public transport. Linz is well known for its successful production industry, cultural diversity, and music. Detailed information about the open PhD position can be found on the homepage of the research group.

  • Nano-electronics Design Automation
    Supervisor: FH-Prof. Dr.-Ing. habil. Hans Georg Brachtendorf

    The ever-increasing demand for realistic simulations of nano-electronic products poses a heavy burden on researchers working in Electronic Design Automation (EDA). Realistic simulations require that models match the physical behavior of the circuits and devices accurately. Usually, large systems of (linear) equations, which arise e.g. from a huge circuit size and/or from a detailed device model, need to be solved during the simulation. Moreover, besides electromagnetic (EM) behavior, thermal and mechanical couplings as well as analogue/digital interactions in mixed-signal designs have to be taken into account.
    In current EU research projects, e.g. the FP7 project nanoCOPS, the circuit simulator LinzFrame, developed by the University of Applied Sciences Upper Austria, has been coupled to a device simulator from the company MAGWEL, Leuven, Belgium, as illustrated in Fig. 1. LinzFrame supports the simulation techniques DC, AC, transient, Harmonic Balance, and multirate envelope, the latter being a feature required for the simulation of radio-frequency (RF) circuits in the GHz or even THz range. MAGWEL's electromagnetic/device simulator solves the space-time partial differential equations arising from the device topology and the physical properties of the materials, whereas LinzFrame solves the circuits at the higher abstraction level of networks and lumped device models. The coupled EM/circuit or mixed-signal simulation enables the design of novel devices and their test within an integrated circuit (IC) before manufacturing, such as the on-chip inductor depicted in Fig. 3. Hence, the time to market for new products can be reduced significantly.
    The new challenges in EDA are addressed both by a continuous increase in computational speed on the one hand and by progress in computational and numerical techniques on the other. In coupled simulation, one may encounter systems with several million unknowns. The co-simulation of circuits and devices, which takes into account the above-mentioned physical effects, demands continuously improved methods and tools to reduce the run time. Such improvements can be achieved by, e.g.,

    • Parallelization of the algorithms and model evaluations on multicore CPUs and GPUs,
    • Iterative linear solvers such as preconditioned Krylov subspace methods,
    • Compact model generation employing techniques from Model Order Reduction (MOR).

    Parallelization can be realized for model evaluation and at the algorithmic level. Typical devices such as MOS and bipolar transistors, capacitances, and inductors can naturally be evaluated in parallel. For these devices there exist models that are industry standards, such as the BSIMx models from the University of California, Berkeley, and MOS9, MOS11, MEXTRAM, etc. from the semiconductor company NXP, The Netherlands. NXP is also a partner in several EU projects. At the algorithmic level, parallelization can e.g. be performed when solving the huge but sparse systems by block-iterative methods. These iterative methods may further be used as preconditioners for Krylov subspace techniques.
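    For illustration, a Jacobi (diagonal) preconditioned conjugate gradient solver, the simplest instance of the preconditioned Krylov subspace methods mentioned above, can be sketched as follows. Real EDA solvers operate on sparse systems with millions of unknowns; the dense 3x3 system here is purely didactic.

```python
# Illustrative sketch only: Jacobi-preconditioned conjugate gradient, a basic
# preconditioned Krylov subspace method. Production solvers use sparse
# storage and far stronger preconditioners.

def jacobi_pcg(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive definite A (dense, row-major)."""
    n = len(b)
    inv_diag = [1.0 / A[i][i] for i in range(n)]   # Jacobi preconditioner
    x = [0.0] * n
    r = b[:]                                       # residual, since x = 0
    z = [inv_diag[i] * r[i] for i in range(n)]     # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [inv_diag[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = jacobi_pcg(A, b)
print(x)  # the residual of A x - b is ~0
```

    Block-iterative methods as described above would replace the diagonal preconditioner with (approximate) solves of diagonal blocks of the system.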
    First attempts at MOR have focused on linear dynamical systems with a single input and a single output (SISO). However, coupling is mainly a multiport problem, and the research focus has therefore naturally been directed to Multiple-Input Multiple-Output (MIMO) systems. Moreover, semiconductor devices are nonlinear multiport systems by nature, so small-signal linear models are not sufficiently accurate for the challenges in EDA. Current research interest therefore focuses on MOR techniques for nonlinear multiport dynamical circuits and systems.

    A background in one or more of the following topics is required:

    • Excellent programming skills in C and C++,
    • Parallelization of algorithms on multicore CPUs and GPUs,
    • Circuit design and simulation,
    • Numerical analysis,
    • A Master's degree in Computer Science, Electronics, or Applied Mathematics.

    Relation to other research activities:

    • ICESTARS: Integrated Circuit/EM Simulation and design Technologies for Advanced Radio Systems-on-chip;  EU Framework Research Programme FP7  under grant agreement  No.  ICT 214911, 2008-2011.
    • Wavelet based simulation of electronic circuits, FWF Österreichischer Wissenschaftsfond, grant agreement No.  P 22549-N18, 2010-2014.
    • ARTEMOS: Agile RF Transceivers and Front-Ends for Future Smart Multi-Standard Communications Applications; ENIAC  nano-scale.
    • nanoCOPS: Nanoelectronic Coupled Problems Solutions;   EU Framework Research Programme FP7, 2013-2016
    • FULLCHESTS:  Full Chip ESD, SPX and TSV Simulation; EU Framework Research Programme Horizon 2020, submitted.
    • Connected Vehicles:  Europäischer Fonds für Regionale Entwicklung (EFRE), submitted.
    • CART - Connected Scene Understanding for Autonomous Road Transport, EU Framework Research Programme Horizon 2020, submitted.
  • Smart materials for fast & easy interaction for textiles
    Supervisor: FH-Prof. Dr. Michael Haller

    Over the last few years, there has been considerable interest in electronic textiles and smart fabrics, in which sensors are integrated directly into fabrics or garments. In line with the introduction of such e-textiles, we have to find novel interaction possibilities that go beyond simple point-and-click interfaces. On the other hand, these e-textile sensors – once integrated cleverly – can also support users while interacting with more embedded devices and provide a unique user experience. The goal of this project is to enhance the interaction space for ubiquitous computing by unlocking the potential of smart fabrics and to develop smart sensing technologies integrated into fabrics.

    First, you will explore different textile materials that can be used for sensing. Next, you will evaluate different possibilities for integrating these materials into textiles (e.g. testing different embroidery patterns to achieve better signals, investigating different material compositions). Additionally, you will integrate these input methodologies in order to test and evaluate their performance in several use cases and application scenarios. Overall, you will address the following three main objectives:

    • Formal evaluation & testing of different textile materials (e.g. semi-conductive yarn) that can be used for sensing.
    • Formal evaluation of how to integrate these sensing materials into fabrics so as to provide easy and fast interaction with everyday objects.
    • Implementation & evaluation of different input methodologies on fabrics that provide immediate access to the ubiquitous environment.

    During your PhD, you will be part of the Media Interaction Lab. The position is fully funded (comfortably covering living expenses and health insurance) for at least 2 years. We are looking for outstanding students with a degree in Computer Science or a related discipline and a strong background in HCI. The Media Interaction Lab is one of the leading Austrian research labs in the area of Human-Computer Interaction & Next-Generation User Interfaces. It is part of the Department for Digital Media of the University of Applied Sciences Upper Austria in Hagenberg, Austria. The lab integrates research and education, providing undergraduate and graduate students with a project-based learning environment. Over the past few years, the research group has mainly focused on exploring, creating, and improving interactive environments. Currently, we are working on a range of projects in collaboration with industry (e.g. BMW, Google [x], LEGO, Microsoft Research) and academia.
    You will be supervised by Michael Haller in collaboration with a co-supervisor at the Johannes Kepler University Linz, Austria. The campus in Hagenberg is located 20 minutes north of Linz, the capital of Upper Austria, and is easily reachable by car or public transport. Linz is well known for its cultural diversity, including media art (e.g., Ars Electronica), and music.

  • Private Cloud Backup of Electronic Identities (eID)
    Supervisor: Univ.Prof. Dr. René Mayrhofer
    One of the future use cases of mobile devices such as smart phones or smart watches is expected to be electronic identity (eID), i.e. the representation of national identity documents, driving licenses, or even passports on smart phones. Supporting eID in addition to mobile payment makes it possible to fully replace the traditional wallet with applications on general smart phones. In addition to improved usability – users have to carry fewer items – this use case can potentially improve both security and privacy for end users: by relying on sensors on the smart phone, the eID and a small authentication applet stored in a tamper-resistant environment can biometrically authenticate users and therefore make theft significantly harder. Furthermore, privacy can be improved by giving users run-time configurability over their eID applications, letting them select which attributes are currently readable by terminals (e.g. when only a proof of age is required, no other attributes of the eID should be visible). However, putting eID on smart phones makes them an even bigger single point of failure. Losing the mobile device is then no longer merely an issue of cost and inconvenience; it could potentially keep users from travelling or generally proving their identity.
    The aim of this research project is to systematically analyze the landscape of data formats, network protocols, cloud services, user authentication methods, and user interaction approaches to support backup and restore of eIDs from mobile devices to remote servers. The inherent problem is that, while available and functional, the mobile device will typically act as the user's digital representative and primary user interface when accessing arbitrary remote services. Typical authentication methods such as OAuth 1.0/2.0 or passwords stored in a password manager / key store are well understood and used by many web services to support usable authentication with mobile devices.
    When users need to transition/restore their eID to newly acquired devices, bootstrapping the first authentication to the cloud backup service is the main problem because: a) the authentication needs to be highly secure to prevent identity theft via the restore service; b) users will very rarely use this service (only in case of emergencies such as loss or theft of the main mobile device); and c) due to the high potential for abuse, a centralized database of plain-text eIDs should be avoided in favor of decentralized/federated, encrypted backup of individual eIDs. This combination of attributes implies that users should choose a strong secret key for authentication and encryption of their eID, but will most probably not remember it without assistance when trying to restore.
    Because purely cryptographic approaches are unlikely to address the whole problem, the research project will be interdisciplinary. Methods under study include biometric authentication, fuzzy cryptography, multi-factor authentication, dynamic service discovery of backup/restore services, psychological aspects of the recall of rarely used secrets, server-side security measures to increase user trust in cloud services, and others.

    The candidate will be integrated both with the Institute of Networks and Security at JKU Linz and with the Josef Ressel Center for User-friendly Secure Mobile Environments, joining a research group of currently 5 PhD students, 3 post-doc researchers, and 2 professors working on various aspects of security and eID. Industry contacts for practical demonstrators of eID on mobile devices are available, and first prototypes are currently in progress. The candidate will therefore be able to build upon working eID concepts and code and focus on the backup/restore problem. Previous experience with cryptographic protocols, biometric authentication, and/or smart phone security is advantageous for the application.

Currently funded theses

  • Muhammad Muaaz: Continuous biometric user authentication on mobile phones
    Supervised by René Mayrhofer and Josef Scharinger
    Part of the Josef Ressel Center u'smile research team
  • Murad Huseynzade: Security management for critical infrastructure
    Supervised by Ingrid Schaumüller-Bichl and Jörg Mühlbacher
  • Bogdan Burlacu: Exact Tracing of Evolutionary Search Trajectories in Complex Hypothesis Spaces
    Supervised by Michael Affenzeller and Josef Küng
    Part of the HEAL research team
  • Can Liu and Yan Xu: Smart Formula Editor for Large Interactive Surfaces
    Supervised by Michael Haller