
José Luis Vázquez-Poletti - Students


 “It's not only education but ideas that will help you survive outside this classroom.”

As vectors of the Future, students need the best technology to bring their ideas to reality while complementing their education. Distributed computing paradigms, and cloud computing in particular, are a great choice due to their high accessibility and the availability of open source tools. They open up a world of research possibilities and foster a fast learning process, allowing students to develop, in a reasonable time, projects like the ones outlined below.


Mutokamwoyo Cloud

Authors: Juan Montiel Cano, Alba Montero Monte, Sergio Semedi Barranco and Adrián Martínez Jiménez (Universidad Complutense de Madrid, Spain)

Juan Montiel Cano, Alba Montero Monte, Sergio Semedi Barranco and Adrián Martínez Jiménez

Context: Degree in Computer Science & Engineering (UCM), co-advised with José Manuel Velasco.

Description: Mutokamwoyo Cloud explores how the cloud computing paradigm can be used for volunteer purposes. The project aims to develop and deploy a lightweight cloud system customized for a populated area in Africa. Communications and Internet access are essential nowadays, and the project's final goal is to provide access to knowledge on useful topics such as general education, medicine or entertainment to people who would otherwise not have that possibility.

Outreach: Concurso Universitario de Software Libre.

Deployment and Migration of Business Services in the Cloud

Author: Bartosz Ignaczewski (Lodz University of Technology, Poland)

Context: Degree in Computer Science & Engineering (UCM), Erasmus Programme.

Bartosz Ignaczewski

Description: Cloud solutions are becoming increasingly popular in business. Many companies decide to deploy their applications in the cloud or migrate them from non-cloud solutions. This allows them to focus on functionality and hand the work of distributed software architecture and scalability over to cloud providers. However, they still need to spend a lot of time building the cloud infrastructure itself. The goal of this work is to create an integrator that makes communication with cloud providers (Amazon in this case) as easy and quick as possible, aiming to be usable even by non-technical users. It supports the full process of application deployment and termination, along with cloud-based database management, which drastically lowers the time needed to set up the infrastructure.


Application Execution Optimization in Heterogeneous High Performance Computing Environments

Author: Richard M. Wallace (Universidad Complutense de Madrid, Spain)

Context: PhD Thesis in Computer Architecture (UCM), co-advised with Daniel Mozos.

Richard M. Wallace, his advisors and the evaluation court

Description: High performance computing systems are used as singular, cluster, or cloud resources, requiring software to be written as if these resources were homogeneous compute platforms regardless of their heterogeneous components. Developers therefore rely on source-language pragmas, inter-process communication libraries, compiler directives, and link-loader directives to control which computational processor executes a program; in practice, specific manual tuning must be done. Current work on auto-tuners pays little attention to federated, distributed methods of determining the optimal mapping of execution units across cloud-deployed computational platforms. This research concentrates on models of computation and allocation methods for executable components across federated, distributed computation systems built from homogeneous and heterogeneous compute elements.


Fingerpay

Authors: Robert Slavi Marinov, Pablo Fernández Grado and Francisco Javier Sánchez Platero (Universidad Complutense de Madrid, Spain)

Context: Degree in Computer Science & Engineering (UCM), co-advised with José Manuel Velasco and Eva Ullán.

Robert Slavi Marinov, Pablo Fernández Grado and Francisco Javier Sánchez Platero

Description: This project aims to provide a disruptive way to pay in small businesses that is convenient for both buyer and seller. Fingerpay is a new secure payment method that relies exclusively on the fingerprint, without needing extra devices such as smartphones. Security is further increased by distributing the user information (personal data, fingerprints) using cloud computing federation methods.


TransLogistic

Author: Piotr Rzeznik (Lodz University of Technology, Poland)

Context: Degree in Computer Science & Engineering (UCM), Erasmus Programme.

Piotr Rzeznik

Description: Logistics companies need smart, fast tools to be competitive players in the transport market. The TransLogistic system was designed to cover the essential needs of freight tracking. Clients can follow their shipment orders in real time on a map visualisation where the route and location points are marked. The system can register different types of shipment and is easily used to track and update the current status of parcels. The user scans the QR code placed on the container or package with a mobile device to read the freight's identification number, and can then view its current location on a dedicated website. The current route is generated automatically from the logged records. All of the company's location points are stored in the database along with the orders and package details, and requests are served by a REST web service.
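The core of the tracking flow above — mapping a scanned freight ID to its logged route and latest position — can be sketched in a few lines. The record schema and function names below are illustrative assumptions, not the actual TransLogistic code; the real system backs this with a database and a REST web service.

```python
# Sketch of freight tracking from logged location records (hypothetical
# schema; the real system stores these in a database behind a REST API).
from dataclasses import dataclass

@dataclass
class LocationLog:
    freight_id: str   # identification number read from the QR code
    timestamp: int    # seconds since epoch
    lat: float
    lon: float

def route_for(freight_id, logs):
    """Reconstruct the travelled route from the logged records, in order."""
    points = sorted((l for l in logs if l.freight_id == freight_id),
                    key=lambda l: l.timestamp)
    return [(l.lat, l.lon) for l in points]

def current_location(freight_id, logs):
    """Latest known position, i.e. the point the map view would mark."""
    route = route_for(freight_id, logs)
    return route[-1] if route else None

logs = [
    LocationLog("PKG-1", 100, 52.23, 21.01),   # Warsaw
    LocationLog("PKG-1", 200, 51.76, 19.46),   # Lodz
    LocationLog("PKG-2", 150, 40.42, -3.70),   # Madrid
]
assert current_location("PKG-1", logs) == (51.76, 19.46)
assert len(route_for("PKG-1", logs)) == 2
```

In the deployed system the same lookup would sit behind a REST endpoint keyed by the scanned identification number.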


Authors: José Luis Góngora Fernández and Abdallah Fallaha Sabhan (Universidad Complutense de Madrid, Spain)

Context: Degree in Computer Science & Engineering (UCM), co-advised with José Manuel Velasco.

Abdallah Fallaha Sabhan and José Luis Góngora Fernández

Description: Cybersecurity is a field that is ever more present in our lives as technology advances. Governments, the military, corporations, financial institutions, hospitals and other businesses collect, process and store a great deal of confidential information on computers and transmit that data across networks. With the growing volume and sophistication of cyber attacks, ongoing attention is required to protect sensitive business and personal information, as well as to safeguard national security. In the future almost everything will be computer-based, and as technology advances new threats will appear, more dangerous and sophisticated than before. The aim of this project is to demonstrate that, with a little knowledge of network security and cloud computing and a few lines of code, it is possible to implement a powerful attack tool that could represent a serious danger to the integrity and confidentiality of users and institutions.


NetworkSafeBox

Authors: Rodrigo Arranz López and Rafael Delgado Meana (Universidad Complutense de Madrid, Spain)

Context: Degree in Computer Science & Engineering (UCM), co-advised with José Manuel Velasco.

Rodrigo Arranz López and Rafael Delgado Meana

Description: At present we are permanently connected to the Internet, whether at home or at work. Anyone can access the Internet quickly and easily from almost any electronic device to search for information, play games, watch multimedia content, and so on. The problem is that cyber criminals take advantage of people who do not look after their network security. Cyber criminals usually have extensive computer knowledge, while most people know little about computer security. How can these people defend themselves? To solve this problem we are developing NetworkSafeBox, which adds a layer of security to your Internet connection. Without much knowledge of computer security, a user can control incoming and outgoing traffic through a web interface in a few clicks. NetworkSafeBox creates a network of virtual machines that controls Internet connections according to the user's needs and protects against malicious access to your home or work network.



Author: José María González Alba (Hospital Ramón y Cajal, Madrid, Spain)

Context: Master in Bioinformatics and Computational Biology (ISCIII)

José María González Alba

Description: The central idea of evolutionary theory is that species are related through a history of common descent, so any study of DNA sequences sampled from different species, or from different individuals in a population, is likely to start with a phylogenetic analysis. Phylogenies are used in almost every branch of biology and have become an indispensable tool for genome comparison. The quantity of sequence data produced by Next Generation Sequencing (NGS) technologies and the complexity of the questions in evolutionary biology pose great statistical and computational challenges. Bayesian Evolutionary Analysis by Sampling Trees (BEAST) is a cross-platform program for Bayesian analysis of molecular sequences oriented towards time-measured phylogenies; it reconstructs phylogenies for testing evolutionary hypotheses without conditioning on a single tree topology. The computational speed of the Bayesian method is highly data-dependent. Separate analysis is useful for studying differences between the reconstructed gene trees, but it is inefficient for estimating the common phylogeny that underlies all genes. The aim of the present project is to evaluate which combination of CPU, memory, storage, and networking capacity is best, in terms of execution time and cost, for running Bayesian evolutionary analysis of large genome datasets in a reasonable time frame.


Ender

Authors: Iratxe González, Sergio Carvajal, Javier Serrano, Todor Velikov and Daniel García (Universidad Complutense de Madrid, Spain)

Context: Degree in Computer Science & Engineering (UCM), co-advised with José Manuel Velasco.

Todor Nikolaev, Sergio Carvajal, Iratxe González, Daniel García and Javier Serrano

Description: IT security is one of the biggest concerns in today's society. The fast development and growth of new technologies that depend on the Internet has made people more concerned about the security of their data, increasing the fear that this data could be easily compromised. One of the measures being taken against security flaws is the education and training of experts, an effort focused on guaranteeing the integrity and confidentiality of cyberspace and of the people who use it. The goal of this project is to create a framework that allows people to train themselves in cybersecurity. To achieve this, Ender consists of a cloud where digital warfare scenarios are deployed. These scenarios have different goals and require different attack skills, and users complete the proposed challenges with their own computers and tools.

3D visualization and analysis for Big Data using cloud computing GPUs

Author: Pedro Rodríguez (Universidad Complutense de Madrid, Spain)

Context: M.E. in Computer Science (UCM), co-advised with José Manuel Velasco.

Pedro Rodríguez

Description: The presence of technology in our day-to-day activities, coupled with the reduction in data storage costs, has started the era of Big Data. However, processing and visualizing such quantities of data requires specific software and hardware, and not all users have the opportunity to access these technologies. On the other hand, modern GPUs have very high specifications, and each new generation provides larger amounts of fast memory and better parallel computing capabilities. This hardware makes it easier to work with big amounts of data, but most users lack the specific knowledge needed to program modern GPUs. Thanks to the newest cloud computing technologies, it is possible to provide a web service that allows any user to analyze, operate on and visualize their data as fast as possible using the latest GPU technologies. The web service takes the user's data as input and provides different ways to work with it, without requiring any knowledge of the technologies running in the background. This project aims to develop a server application capable of analyzing large quantities of data using GPU-specific software; the user interacts with it through a browser-based web application.

CtOS Enabler

Authors: Meriem El Yamri, Rodrigo Crespo and Juan Manuel Carrera (Universidad Complutense de Madrid, Spain)

Context: Degree in Computer Science & Engineering (UCM), co-advised with Eva Ullán.

Juan Manuel Carrera, Meriem El Yamri and Rodrigo Crespo

Description: Smart Cities are undoubtedly the near future of technology, as can be observed in the profusion of mobile devices among the population, devices that automate daily life through the use of geolocation and information. These are the two areas that CtOS Enabler intends to unite, creating a standard of use that covers any Smart City system and eases the creation of new tools for developers of that kind of software. CtOS provides a low-complexity layer that turns the interaction between information and positioning into a simple task, enabling the creation of high-quality, successful products. The complete functionality resides in the cloud, so its resources are easily scalable and accessible. The system lets the user define working regions in which to store information that interacts with their devices whenever they are within those zones, allowing location-based real-time tasks. As an added value, statistical data on service usage is provided to help improve the customer's product. CtOS Enabler is geared towards both developers, who can use it as a Platform as a Service (PaaS), and companies or Infrastructure as a Service (IaaS) providers willing to use the API or act as intermediaries, providing the service to their own customers through various highly flexible subscription models according to the functionality required. CtOS Enabler helps improve the geolocation capacity of any software, from location-limited chats or a smart parking app to something more complex like a traffic-light and bollard control system for law enforcement and rescue services in a state of emergency.
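The region-based triggering at the heart of the description — stored information "interacting" with a device once it enters a zone — boils down to a geofence test. The region names, radii and function names below are illustrative assumptions, not the CtOS Enabler API.

```python
# Minimal geofence sketch: a device reports its position and the service
# returns every stored region that contains it. Illustrative data only.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    to_rad = math.radians
    dlat = to_rad(lat2 - lat1)
    dlon = to_rad(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2 +
         math.cos(to_rad(lat1)) * math.cos(to_rad(lat2)) *
         math.sin(dlon / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

# Hypothetical working regions: name -> (lat, lon, radius in km).
regions = {
    "parking-lot": (40.4530, -3.7280, 0.5),
    "city-centre": (40.4168, -3.7038, 2.0),
}

def triggered_regions(lat, lon):
    """Regions whose stored information should interact with the device."""
    return sorted(name for name, (rlat, rlon, radius) in regions.items()
                  if haversine_km(lat, lon, rlat, rlon) <= radius)

# A device at the city-centre coordinates falls inside that region only.
assert triggered_regions(40.4168, -3.7038) == ["city-centre"]
```

In the cloud service this check would run per position update, with the region store scaled out across cloud resources.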

Outreach: Smart Cities DataFest Hackaton (IBM Prize), #SUPMVC2, Madrid Venture Café, Startup Programme, El Economista, Te Interesa, La Información, Europa Press, Ingenieros, RedEmprendia, El Referente, Servimedia, Discapnet, Diario Siglo XXI, Emes, La Linterna (COPE), InnovaSpain, Emprendedores, Infocalidad, Crónica Norte, Radio 5 (RNE) and Tribuna Complutense.

Personal AnonyCloud

Authors: Alba Moragrega, Alberto González and Sergio Baños (Universidad Complutense de Madrid, Spain)

Context: M.E. in Computer Science (UCM), co-advised with José Manuel Velasco.

Alba Moragrega, Alberto González and Sergio Baños

Description: This project attempts to solve some of the major problems of using today's public networks. VPNs are mostly used for point-to-point connections; with Personal AnonyCloud, the user can build a VPN composed of several relay servers. It also increases network security, as the machines composing the network are owned by the user. These machines can be created on demand using public cloud providers, so they can be replaced in case of intrusion. Personal AnonyCloud also offers an intuitive, simple interface, making the tool usable by almost anyone regardless of their technical knowledge of VPNs and security.


Face recognition security system on public cloud infrastructures

Author: Ramón Pintado (Universidad Complutense de Madrid, Spain)

Context: Master in Research in Computer Science (UCM)

Ramón Pintado

Description: Security is a key issue in today's society. Public places with large numbers of people, such as airports, train stations or crowded social events, have security systems with video surveillance. This project proposes two models for implementing a security system using the emerging technology of facial recognition, equipped with enough power to handle large amounts of data using public cloud technology. The first model, intended for static scenarios like security booths, cash machines or one-way corridors, provides very accurate results by controlling the lighting, pose and expression of the individuals analysed. The second model is prepared for dynamic environments, such as IP surveillance cameras, where environment variables are not controlled, and acts as an excellent complement to the first. These models allow the identification of persons whose photographs are included in the large databases of the world's security agencies. Additionally, given the wide variety of machines offered by the public cloud, a decision-tree-based model is presented with which the user can easily determine the number of machines needed, depending on the flow of people, the maximum identification time, and the monetary cost they are willing to assume for the best performance or maximum power.
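The sizing trade-off in the last sentence — flow of people versus deadline versus budget — can be sketched as a tiny cost model. The throughput and price figures below are invented for illustration; they are not measurements from the project.

```python
# Sketch of the machine-sizing decision: how many cloud machines to rent
# given the flow of people and an hourly budget. Assumed numbers only.
import math

FACES_PER_MACHINE_PER_MIN = 120   # assumed recognition throughput
PRICE_PER_MACHINE_HOUR = 0.50     # assumed hourly cloud price (USD)

def machines_needed(people_per_min):
    """Smallest fleet whose combined throughput keeps up with the flow."""
    return math.ceil(people_per_min / FACES_PER_MACHINE_PER_MIN)

def fleet_choice(people_per_min, budget_per_hour):
    """Cheapest fleet that keeps up, capped by what the budget affords."""
    ideal = machines_needed(people_per_min)
    affordable = int(budget_per_hour // PRICE_PER_MACHINE_HOUR)
    return min(ideal, max(affordable, 1))

# 600 people/min would need 5 machines; a $2/h budget only affords 4.
assert machines_needed(600) == 5
assert fleet_choice(600, budget_per_hour=2.0) == 4
```

The project's actual decision tree additionally folds in the maximum identification time and the choice between the two recognition models; this sketch only shows the throughput-versus-cost branch.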


Author: Unai López de Heredia (Universidad Politécnica de Madrid, Spain)

Context: Master in Bioinformatics and Computational Biology (ISCIII)

Unai López de Heredia

Description: Next Generation DNA/RNA sequencing studies produce large amounts of data that need to be analysed with proper computing infrastructure, which is not always available to all research centres or universities. Expression analysis derived from RNA sequencing (RNA-Seq) of forest trees is particularly challenging, because the massive genome size (~23.2 Gbp for loblolly pine) and the absence of reference genomes require specific pipelines to obtain sound biological results. The cloud computing paradigm makes the implementation of such pipelines economically affordable. If an algorithm can be made to run efficiently on many loosely coupled processors, implementing it as a cloud application makes it particularly easy to exploit the resources offered by large utility-computing services, such as Amazon's Elastic Compute Cloud. The aims of the present project are: 1) to review the tools and pipelines for differential expression analysis using RNA-Seq data; 2) to optimize a pipeline for the analysis of differential expression in non-model species with large genomes; and 3) to analyse the cost-efficiency of the pipeline on Amazon's Elastic Compute Cloud versus local systems.
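Aim 3 — cloud versus local cost-efficiency — comes down to comparing pay-per-use pricing against the amortised cost of owned hardware. Every figure in the sketch below is a made-up assumption for illustration, not a benchmark from the project.

```python
# Back-of-the-envelope cost comparison for one pipeline run on EC2 versus
# an owned cluster. All prices and run counts are illustrative.
def cloud_cost(hours, nodes, price_per_node_hour):
    """Pay-per-use: cost is strictly proportional to usage."""
    return hours * nodes * price_per_node_hour

def local_cost_per_run(hardware_price, lifetime_runs, power_per_run):
    """Owned cluster: purchase price amortised over its useful runs."""
    return hardware_price / lifetime_runs + power_per_run

ec2 = cloud_cost(hours=48, nodes=8, price_per_node_hour=0.65)
local = local_cost_per_run(hardware_price=40_000, lifetime_runs=200,
                           power_per_run=35)
assert round(ec2, 2) == 249.60
assert local == 235.0
```

The crossover depends entirely on utilisation: the fewer runs an owned cluster performs over its lifetime, the more attractive the pay-per-use side becomes.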

EMS in Cloud

Authors: Alejandra González, Ricardo Champa and Claudia Montero (Universidad Complutense de Madrid, Spain)

Context: M.E. in Computer Science (UCM), co-advised with Manuel Vázquez (Gobierno de Aragón EMS).

Alejandra González, Ricardo Champa and Claudia Montero

Description: In the world of medical emergencies, every second counts. Medical teams have mobility needs, and one way to improve their performance is to take advantage of software applications based on cloud systems, which make protocols and tools available to every team that needs them. EMS in Cloud is a Software as a Service system deployed on multiple platforms. The framework provides an economical, cloud-style solution for supporting the service and continuing care. Even when no Internet connection is available, EMS in Cloud offers a mobile client that can work offline with most of its functionality.
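An offline-capable client like the one described typically records work locally and flushes it to the cloud once connectivity returns. The class and method names below are illustrative assumptions, not the EMS in Cloud API.

```python
# Sketch of offline-first behaviour: entries made without connectivity
# are queued locally and synced later. Names are hypothetical.
class OfflineQueue:
    def __init__(self, send):
        self.send = send      # callable that pushes one record upstream
        self.pending = []     # records made while offline

    def record(self, entry, online):
        """Store the entry; deliver immediately only when online."""
        if online:
            self.send(entry)
        else:
            self.pending.append(entry)

    def sync(self):
        """Flush everything recorded while offline, oldest first."""
        while self.pending:
            self.send(self.pending.pop(0))

delivered = []
q = OfflineQueue(delivered.append)
q.record({"patient": "A", "note": "stabilised"}, online=False)
q.record({"patient": "B", "note": "transported"}, online=True)
assert delivered == [{"patient": "B", "note": "transported"}]
q.sync()
assert len(delivered) == 2
```

A production client would also persist the queue to local storage and handle conflicts on sync, which this sketch omits.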

Outreach: HPCwire.


CloudMiner

Authors: Tomás Restrepo, Juan Arratia and Arturo Pareja (Universidad Complutense de Madrid, Spain)

Context: M.E. in Computer Science (UCM).

Tomás Restrepo, Juan Arratia and Arturo Pareja

Description: CloudMiner aims to better exploit existing hardware, achieving economic benefit through the mining of virtual crypto-currencies. The main idea is to create a cloud-computing resource pool composed of diverse machines with potentially different architectures. This pool is monitored in real time by the application, enabling the user to start or stop mining at any given point on any of the available machines, and providing information on their current status. Additional options are available, such as adding or removing resources (of the supported architectures). Statistics-based artificial intelligence may be used to allow automated control of the mining cloud; these statistics are also visible to the user, to aid decision making.

Outreach: HPCwire.


On-demand secure teleconferencing on public cloud infrastructures

Author: Bernardo Pericacho (Universidad Complutense de Madrid, Spain)

Context: Master in Research in Computer Science (UCM), co-advised with Ignacio Martín-Llorente.

Bernardo Pericacho

Description: One of the main handicaps of migrating applications to the cloud is security. Having no control over how our data travels through the Internet, and not knowing who has access to it, makes users reluctant to adopt a public cloud migration strategy. Communications over the Internet are a concrete example where users may need to communicate in a secure way that is not susceptible to eavesdropping or interception, i.e. without a third party listening in. In this project, two new secure teleconferencing architectures deployed in the cloud are presented, the Teleconferencing Secure Cloud VPN Architecture and the Hidden Teleconferencing Server Cloud Architecture, to address possible security breaches and attacks on voice over IP (VoIP) communications in an unsafe environment such as the Internet. In addition, a model represented by a decision tree is proposed, with which a user can easily determine the architecture and infrastructure that best fit their needs when establishing a secure teleconference server on a public cloud.

Outreach: HPCwire.


CygnusCloud

Authors: Luis Barrios, Adrián Fernández and Samuel Guayerbas (Universidad Complutense de Madrid, Spain)

Context: M.E. in Computer Science (UCM), co-advised with José Antonio Martín H..

Adrián Fernández, Samuel Guayerbas and Luis Barrios

Description: This project aims to provide virtual lab machines that can be accessed from any available campus PC, with minimal hardware and software requirements on the client side. An on-demand, centralized distribution of these services, as proposed by CygnusCloud, reduces the effects of budget cuts in education, since students can use cheaper computers with lower energy consumption. The proposed solution supports academic progress by optimizing the use of non-specialized computer labs, and reduces costs by relying entirely on open source software.

Outreach: HPC in the Cloud, Cadena SER, Blog, Buenas Noticias (UCJC), Concurso Universitario de Software Libre and Premio Proyecto Fin de Carrera 2012-2013 itSMF.


SmartCloud

Authors: Ailyn Baltá, Javier Bachrachas and César Cayo (Universidad Complutense de Madrid, Spain)

Context: M.E. in Computer Science (UCM), co-advised with José Antonio Martín H..

César Cayo, Javier Bachrachas and Ailyn Baltá

Description: SmartCloud is focused on getting the most value out of existing infrastructure, as well as providing a service to UCM researchers and other members of an academic environment. The main idea is to host virtual machines on LAN computers that are not currently in use. These machines do not need to reside in a computer laboratory; machines belonging to the administrative staff also qualify for the SmartCloud resource pool. They can host several guest types, serving as processing nodes inside a computing cluster or as separate machines for interactive access. One of the approaches is to rely on IPython to give researchers access to a powerful Matlab-style web notebook linked to their Google Drive accounts, so they can share their data easily.

Outreach: HPC in the Cloud, Cadena SER, Buenas Noticias (UCJC) and Jornadas de Paralelismo 2013 - Congreso Español de Informática 2013 (Madrid, Spain).


Author: Guillermo Marco (Sistemas Genómicos, Spain)

Context: Master in Bioinformatics and Computational Biology (UCM)

Guillermo Marco

Description: The environment chosen for this project is Sistemas Genómicos, an SME offering genetic analysis and bioinformatics services. The company had a limited computing environment due to the high demand for running applications. Unfortunately, this demand is not continuous, and in the current economic situation scaling through the purchase of new equipment was not a valid option. As cloud computing allows, among other things, the provisioning of resources on demand, paying only for what you use, my student saw a great opportunity when his employer signed a partnership with T-Systems. The first objective of the project is to implement a computing prototype that expands the local cluster infrastructure into the cloud. The second is to calculate both the economic and the computational costs of certain bioinformatics applications in the cloud, in order to maximize the efficiency of the chosen cloud configuration.

Outreach: HPC in the Cloud, Cadena SER and Buenas Noticias (UCJC).



Authors: Isabel Espinar, Adrián Escoms and Esther Rodrigo (Universidad Complutense de Madrid, Spain)

Context: M.E. in Computer Science (UCM)

Isabel Espinar, Adrián Escoms and Esther Rodrigo

Description: This system provides a solution for startups with a low budget that rely solely on on-demand resources provided by a public cloud infrastructure. A development phase of a given product has specific software requirements that can be restrictive, so developers need a tool to easily deploy and (re)configure the necessary machines. Developers also need a standard entry point to the development platform, allowing them to switch contexts as fast as possible. On the other hand, the startup needs to calculate the exact development cost in order to assign a competitive price to the final product.

Outreach: HPC in the Cloud and Tribuna Complutense.


Author: Gonzalo Santana Rodríguez (Museo de Ciencias Naturales, Spain)

Context: Master in Bioinformatics and Computational Biology (UCM)

Gonzalo Santana Rodríguez

Description: This system aims to understand the effects of climatic elements on the evolution, migration and extinction of species and on biodiversity. This is accomplished with complex mathematical models that generate physiological responses of the studied species from climatic data. Survival odds for each species are then obtained for a given area. The resulting system tests whether biodiversity in mountains is higher than in other zones, or whether climate change forces species to migrate to zones where their survival odds are higher. At a lower level, the parallel application can be executed on both CPU and GPU resources provided by local and cloud infrastructures; for this reason, the OpenCL API was chosen.

Outreach: HPC in the Cloud and Tribuna Complutense.



Authors: Alberto Megía Negrillo, Antonio Molinera Lamas and José Antonio Sánchez (Universidad Complutense de Madrid, Spain)

Context: M.E. in Computer Science (UCM)

Alberto Megía Negrillo, Antonio Molinera Lamas and José Antonio Sánchez

Description: This system takes full advantage of cloud computing and parallel programming to factorize very large integers, the real basis of the RSA cryptosystem's security, by executing different mathematical algorithms such as trial division and the quadratic sieve. A combination of Amazon's public cloud infrastructure and private servers was chosen to achieve this goal, since together they make it possible to reach optimal results in terms of time and cost. One of the most relevant characteristics of this project is its modular design, starting with the simulation software "Forecaster", whose purpose is to estimate the required processing time and the cost on Amazon's cloud infrastructure. Another module, "Engine", factorizes RSA keys using parallel programming; it connects a network of servers and generates and assigns computing operations without any user interaction. Furthermore, a graphical application, "Codeswarm", is included as a module to visualize the interactions between the servers performing the factorization task.
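Of the two algorithms named above, trial division is the simpler; its serial core fits in a few lines. The project parallelises this kind of work across cloud workers, so the sketch below only shows the mathematical idea, on toy numbers far smaller than real RSA moduli.

```python
# Trial division: the simplest integer-factorization algorithm.
def trial_division(n):
    """Return the prime factorization of n as an ordered list of factors."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:     # divide out each prime completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                 # whatever remains is itself prime
        factors.append(n)
    return factors

# Factoring a toy "RSA modulus" (product of two primes).
assert trial_division(3233) == [53, 61]
assert trial_division(600) == [2, 2, 2, 3, 5, 5]
```

Trial division is only practical for small factors, which is precisely why the project also implements the quadratic sieve and distributes the search space across many machines.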

Outreach: HPC in the Cloud, Tribuna Complutense and Jornadas de Paralelismo 2011 (Tenerife, Spain).


GridWay Graphical User Interface

Author: Srinivasan Natarajan (California State University at Sacramento, USA)

Context: Google Summer of Code 2009

Srinivasan Natarajan

Description: The current GridWay environment does not provide users with a graphical interface for monitoring and submitting jobs. One can be built with GTK+ (the GIMP Toolkit) on top of the existing infrastructure of the DRMAA API (C bindings). The existing command-line functionality is exposed through the graphical user interface, so that users can compose, manage, synchronise and control their jobs by clicking in the interface instead of typing commands.

Outreach: Google Open Source Blog and GridCast.

