Assistant Professor in Visual Computing at Durham University. Stuart's research focuses on Visual Reasoning to understand the layout of visual content, from iconography (e.g. sketches) to 3D scene understanding, and its implications for methods of interaction. He is currently a co-I on the RePAIR EU FET, DCitizens EU Twinning, and BoSS EU Lighthouse projects. He was a co-I on the MEMEX RIA EU H2020 project, coordinated at IIT, for increasing social inclusion through Cultural Heritage. Stuart has previously held Researcher and PostDoc positions at IIT, as well as PostDocs at University College London (UCL) and the University of Surrey, where he was also awarded his PhD on visual information retrieval for sketches. Stuart holds an External Scientist position at IIT, Honorary roles at UCL and UCL Digital Humanities, and is an international collaborator of ITI/LARSyS. He also regularly organises the Vision for Art (VISART) workshop and Humanities-oriented tutorials, and was Program Chair of the British Machine Vision Conference (BMVC) 2021.
Exploring Visual Reasoning to bring higher-level reasoning and knowledge to answering the questions of society and culture, supporting the MEMEX RIA H2020 EU project and the RePAIR FET H2020 EU project in achieving their objectives.
Working under Dr Alessio Del Bue
Exploring Visual Question Answering in relation to geometry for scene understanding. More to come about this project as it is published.
Working under Prof. Tim Weyrich
Continued work on Cultural Heritage Vision and Graphics applications.
Working under Prof. Tim Weyrich
This project explores two different aspects of Digital Heritage: the analysis of artefacts and of printed illustrations. Firstly, the 3D reconstruction of artefacts, specifically bas-relief, poses challenges to traditional computer vision methods. By utilising and developing new combinations of multi-view photometric stereo and structure from motion, we aim to reconstruct British Museum assets. Secondly, exploring the analysis of illustrations in historical books provided by the British Library, we develop an approach to segment sparse line structures with learnt shading styles.
Working under Dr John Collomosse
An EPSRC-funded project exploring the digital effect on key life transition points through social media. The project required the development of algorithms for classification and clustering utilising both image and text. It additionally explored the purification of large, noisy social media datasets using genetic algorithms, as well as structured manifold mapping techniques for the presentation of user data through a 2D game interface.
Working under Dr John Collomosse
Funding for research into Visual Information Retrieval.
Working under Mr Michael Anthony
Providing computer support for a small business, demonstrated through developing reliable systems with strong backup and recovery policies. Work at JCS Technology involved setting up servers and network infrastructure, and support for a variety of platforms: Windows, Linux and bespoke systems. The role provided the opportunity to work within a budget and make key decisions on the day-to-day operation of the business.
Chair | Virtual
Organiser (& Presenter) | Virtual
Program Chair | Virtual
Chair | Virtual
Chair | Munich
Media retrieval has been dominated by text-based queries utilising meta-data tags, but such queries are cumbersome for describing appearance and, in the case of video, temporal information. We propose methods using sketch as an intuitive way to describe and search such media content. Sketch-based video retrieval has traditionally applied complex model fitting; in contrast, we explore representations suitable for index structures to achieve sublinear query time, which also makes it possible to bring the user into the loop through relevance feedback. Secondly, we propose Sketch Based Human Pose Retrieval (SBHPR), a method of finding human postures within videos using stickman depictions, developing a manifold-based retrieval method and learning a domain adaptation to improve precision on new videos. Finally, we extend SBHPR to a storyboard, allowing a sequence of poses and action labels (run, jump) to be intertwined. This is demonstrated for video segment retrieval and the synthesis of new video by extending the motion graph technique.
Dissertation - Fluid dynamics simulation interacting with rigid-body objects using Smoothed Particle Hydrodynamics, based on Müller's algorithm. Other notable projects - Sony PSP Student Development Kit
European Union | Horizon Europe Lighthouse | Budget: € 5.0 mil
January 2023 - December 2025
The vision of the BoSS project is to demonstrate and archive solutions for climate neutrality, with a particular focus on coastal cities as an interface to healthy seas, oceans and water bodies, envisioning a new triangle of sustainability, inclusion, and design focused on the most important global natural space. BoSS will offer opportunities to engage with communities for an environmentally sustainable, socially fair, and aesthetically appealing transition. Seven lighthouse demonstrators, located in four different regions and aquatic ecosystems in Portugal (estuary), Italy (lagoon and gulf), Sweden/Germany (strait / North Sea / river), and the Netherlands/Belgium (delta), will showcase the transformational and uptake impact at the EU level, serving as lighthouse pilots for the implementation of Horizon Europe mission objectives and showcasing innovative solutions. The seven pilots will all provide tangible examples of mission-oriented approaches that are impactful, measurable, and targeted. The action plan includes the deployment of "drops" in all pilots, designed to generate "ripple" effects at the local (demonstrator) level, then at the city/region levels (demonstrating effects of scale), and at a broader level (demonstrating replication). BoSS therefore introduces an ecocentric narrative, both cosmopolitan and rooted in nature-based solutions, plural and testimonial, proposing to apply a design approach to complex socio-technical-ecological and more-than-anthropocentric problems. An agenda that moves from fixing to caring, from growth to nurture, from certainty to contingency will enable designers, architects, and engineers to think about assemblages instead of systems and change the outcome from extinction to precarious flourishing. The design of these interactions generates the emergence of new aesthetics and, most decisively, a critical awareness of history, the contemporary, and the future: designing beyond humans as a way to sustain our future.
European Union | Horizon Europe Twinning | Budget: € 1.1 mil
December 2022 - November 2025
The EU-funded DCitizens project focuses on sustainability and resilience in public service delivery based on innovative technology and the participation of all stakeholders involved in digital civics. Researchers propose ways to bind IT research & innovation, government bodies, private service providers and local communities to shape a new model for citizens' relationships with their local and state governments. By supporting digital civics research communities and policy- and lawmakers, training staff, and encouraging twinning partnerships, DCitizens lays the foundations for a more citizen-centred approach to public service design and delivery.
European Union | Horizon 2020 FET Open | Budget: € 3.5 mil
September 2021 - February 2025
The physical reconstruction of shattered artworks is one of the most labour-intensive steps in archaeological research. Dug out from excavation sites are countless ancient artefacts, such as vases, amphoras and frescoes, that are damaged. The EU-funded RePAIR project will facilitate the reconstruction process to bring ancient artworks back to life. Specifically, it will develop an intelligent robotic system that can autonomously process, match and physically assemble large fractured artefacts in a fraction of the time required by humans. This new system will be tested on iconic case studies from the UNESCO World Heritage Site of Pompeii. It will restore two world-renowned frescoes, which are in thousands of broken pieces and currently in storerooms.
European Union | Horizon 2020 Research and Innovation Action (RIA) | Budget: € 4 mil
December 2019 - November 2022
The future of our cultural heritage is augmented thanks to inclusive digital storytelling tools. Memories will be intertwined with physical places, locations and objects to promote social cohesion. This is the aim of the EU-funded MEMEX project. It will create assisted augmented reality experiences in the form of stories that intertwine the memories of participating communities. It will develop techniques to (semi-)automatically link images to location and connect to a new open-source knowledge graph that will facilitate assisted storytelling. MEMEX will focus on Barcelona's migrant women. It will also throw the spotlight on residents in Paris' XIX district (home to one of the city's largest immigrant communities) and on second- and third-generation Portuguese migrants in Lisbon.
Vaibhav Bansal, Stuart James, Alessio Del Bue
Mohamed Dahy Elkhouly, Stuart James, Alessio Del Bue
Festival Costituzione: Cultura, Ricerca, Scientifica, E Tecnica | San Daniele, Italy
Society continuously creates digital archives through the digitisation of old content or through new posts on "social" platforms, where the dominant medium is visual, whether image or video. The ability to exploit these archives advantageously depends on the capacity of machines to filter, search and represent the vast quantity of data using analysis algorithms. These algorithms allow us to contemplate all of our online activities at a higher level, or even to re-animate archived dance videos with new choreographies. Now, as we enter the era of Artificial Intelligence, our ability to reflect on data relating to culture can only increase.
University of Adelaide | Adelaide, Australia
In this talk, we look at understanding the relationships between objects within a 3D scene. Firstly, we present our latest paper on using multi-view information to construct a scene graph of objects guided by the layout of ellipsoids. Our ellipsoid nodes, coupled with object nodes, act as proxies allowing the relationships 'same-set', 'part-of', 'same-plane' and 'support' to be inferred by message passing over the graph. We build an architecture that supports such geometric nodes, object nodes and relational nodes, merged within an RNN framework. Secondly, we show how a question about the layout of a scene can be directly answered using RGBD. Using a depth branch guided by region proposals inferred from the RGB, we show how encoding the relationships between regions provides the necessary support to improve answer prediction. We evaluate on new datasets designed for the VQA depth problem.
University College London | London, UK
Humans have an innate ability to communicate visually; the earliest forms of communication were cave drawings, and children can communicate visual descriptions of scenes through drawings well before they can write. Drawings and sketches offer an intuitive and efficient means for communicating visual concepts. Today, society faces a deluge of digital visual content driven by a surge in the generation of video on social media and the online availability of video archives. Mobile devices are emerging as the dominant platform for consuming this content, with Cisco predicting that by 2018 over 80% of mobile traffic will be video. Sketch offers a familiar and expressive modality for interacting with video on the touch-screens commonly present on such devices. This presentation contributes several new algorithms for searching and manipulating video using free-hand sketches. We propose the Visual Narrative (VN): a storyboarded sequence of one or more actions, in the form of sketch, that collectively describe an event. We show that VNs can be used both to efficiently search video repositories and to synthesise video clips. First, we describe a sketch-based video retrieval (SBVR) system that fuses multiple modalities (shape, colour, semantics, and motion) in order to find relevant video clips. An efficient multi-modal video descriptor is proposed, enabling the search of hundreds of videos in milliseconds. This contrasts with prior SBVR systems, which lack an efficient index representation and take minutes or hours to search similar datasets. This contribution not only makes SBVR practical at interactive speeds, but also enables user refinement of results through relevance feedback to resolve sketch ambiguity, including the relative priority of the different VN modalities. Second, we present the first algorithm for sketch-based pose retrieval. A pictographic representation (stick-men) is used to specify a desired human pose within the VN, and similar poses are found within a video dataset.
We use archival dance performance footage from the UK National Resource Centre for Dance (UK-NRCD), containing diverse examples of human pose. We investigate appropriate descriptors for sketch and video, and propose a novel manifold learning technique for mapping between the two descriptor spaces and so performing sketched pose retrieval. We show that domain adaptation can be applied to boost the performance of this system through a novel piece-wise feature-space warping technique. Third, we present a graph representation for VNs comprising multiple actions. We focus on the extension of our pose retrieval system to a sequence of poses interspersed with actions (e.g. jump, twirl). We show that our graph representation can be used for two applications: 1) to retrieve sequences of video comprising multiple actions; 2) to synthesise new video sequences by retrieving and concatenating video fragments from archival footage.
University of Surrey | Surrey, UK
University of Surrey | Surrey, UK
INESC-ID | Lisbon, Portugal
University of Manchester | Manchester, UK
Additional Honours - BMVA Summer School Poster Competition Runner-Up Award