Blog

18 Apr 2018 . research .
Organising VisArt @ ECCV'18!

This year I’m very excited to be organising the workshop VISART IV with several other great chairs:
  • Alessio Del Bue, Istituto Italiano di Tecnologia (IIT)
  • Leonardo Impett, EPFL & Biblioteca Hertziana, Max Planck Institute for Art History
  • Peter Hall, University of Bath
  • Joao Paulo Costeira, ISR, Instituto Superior Técnico
  • Peter Bell, Friedrich-Alexander University Erlangen-Nürnberg

We hope that this year pushes the collaboration between Computer Vision, Digital Humanities and Art History even further, with the aim of generating some fantastic new partnerships whose work can be published at this workshop and future ones.

This closer relationship is exemplified by the new track, which invites Digital Humanities and Art History researchers to join the conversation about what they are doing with Computer Vision and how we can help them in the future. I’m very excited to see what gets submitted.

So here is the Call for Papers, enjoy!


VISART IV “Where Computer Vision Meets Art” Pre-Announcement


4th Workshop on Computer VISion for ART Analysis
In conjunction with the 2018 European Conference on Computer Vision (ECCV), Cultural Center (Kulturzentrum Gasteig), Munich, Germany


IMPORTANT DATES
  • Full & Extended Abstract Paper Submission: July 9th 2018
  • Notification of Acceptance: August 3rd 2018
  • Camera-Ready Paper Due: September 21st 2018
  • Workshop: September 9th 2018


CALL FOR PAPERS

Following the success of the previous editions of the Workshop on Computer VISion for ART Analysis held in 2012, 2014 and 2016, we present the VISART IV workshop, in conjunction with the 2018 European Conference on Computer Vision (ECCV 2018). VISART will continue its role as a forum for the presentation, discussion and publication of computer vision techniques for the analysis of art. In contrast with prior editions, VISART IV will expand its remit, offering two tracks for submission:
  1. Computer Vision for Art - technical work (standard ECCV submission, 14 pages excluding references)
  2. Uses and Reflection of Computer Vision for Art (extended abstract, 4 pages excluding references)

The recent explosion in the digitisation of artworks highlights the concrete importance of applications at the overlap between computer vision and art, such as the automatic indexing of databases of paintings and drawings, or automatic tools for the analysis of cultural heritage. Such an encounter, however, also opens the door both to a wider computational understanding of the image beyond photo-geometry, and to a deeper critical engagement with how images are mediated, understood or produced by computer vision techniques in the ‘Age of Image-Machines’ (T. J. Clark). Whereas submissions to our first track should primarily consist of technical papers, our second track therefore encourages critical essays or extended abstracts from art historians, artists, cultural historians, media theorists and computer scientists.

The purpose of this workshop is to bring together leading researchers in the fields of computer vision and the digital humanities with art and cultural historians and artists, to promote interdisciplinary collaborations, and to expose the hybrid community to cutting-edge techniques and open problems on both sides of this fascinating area of study.

This one-day workshop, in conjunction with ECCV 2018, calls for high-quality, previously unpublished work related to Computer Vision and Cultural History. Submissions for both tracks should conform to the ECCV 2018 proceedings style. Papers must be submitted online through the CMT submission system at:

https://cmt3.research.microsoft.com/VISART2018/

and will be double-blind peer-reviewed by at least three reviewers.

TOPICS include but are not limited to:

  • Art History and Computer Vision
  • 3D reconstruction from visual art or historical sites
  • Artistic style transfer from artworks to images and 3D scans
  • 2D and 3D human pose estimation in art
  • Image and visual representation in art
  • Computer Vision for cultural heritage applications
  • Authentication, forensics and dating
  • Big-data analysis of art
  • Media content analysis and search
  • Visual Question Answering (VQA) or Captioning for Art
  • Visual human-machine interaction for Cultural Heritage
  • Multimedia databases and digital libraries for artistic and art-historical research
  • Interactive 3D media and immersive AR/VR environments for Cultural Heritage
  • Digital recognition, analysis or augmentation of historical maps
  • Security and legal issues in the digital presentation and distribution of cultural information
  • Surveillance and behaviour analysis in Galleries, Libraries, Archives and Museums

INVITED SPEAKERS

  • Peter Bell (Professor of Digital Humanities - Art History, Friedrich-Alexander University Erlangen-Nürnberg)
  • Björn Ommer (Professor of Computer Vision, Heidelberg)
  • Eva-Maria Seng (Chair of Tangible and Intangible Heritage, Faculty of Cultural Studies, University of Paderborn)
  • More speakers TBC

PROGRAM COMMITTEE

To be confirmed.

ORGANIZERS:
  • Alessio Del Bue, Istituto Italiano di Tecnologia (IIT)
  • Leonardo Impett, EPFL & Biblioteca Hertziana, Max Planck Institute for Art History
  • Stuart James, Istituto Italiano di Tecnologia (IIT)
  • Peter Hall, University of Bath
  • Joao Paulo Costeira, ISR, Instituto Superior Técnico
  • Peter Bell, Friedrich-Alexander University Erlangen-Nürnberg


30 Nov 2017 . running .
A lesson in stopping

For a long time now I have pushed myself to run further, faster and harder; this year has been the pinnacle of that, with some of my heaviest running months and one of my lowest. The result was running over 1,000 miles in a year, an enormous unplanned achievement. But plagued by persistent injuries of many varieties, I pushed on and didn’t listen to the needs of my body, hoping and praying that it wasn’t serious. Amazingly I have survived through, but as the year comes to a close, I’m doing a conservative Advent Running period and slowing down to enjoy time with my dog, Pixel.

After running (and walking) the Lisbon marathon I came back with an aim to do the Pisa marathon just two months later, so I could be confident in saying I had run a marathon this year. It is incredible how thoroughly I had convinced myself that this was my aim for the year and that I would push myself to achieve it. But a terrible tumble pushed my already multiply injured ankle over the edge.

After a month off, scheduled physiotherapy and taking recovery very slowly, I’m carefully getting back into running. But reflecting on the year I can see the clear mistakes I have made. I’m not a professional runner; it is a hobby; marathons are great goals but times are not. My best training runs were when I wore the GPS watch but actively avoided looking at it. Over 20 miles and not caring about pace is lovely, especially when you are running down the coast of Italy.

So the plan for next year is to set smaller, more frequent goals, recovery permitting: to stop running to a time, and to start running to explore and to run with the awesome people of the running community. It seems a little early to set goals for next year, but a month off running has given me a lot to reflect on.

Run wise!


25 Apr 2017 . research .
Texture Stationarization: Turning Photos into Tileable Textures

I’m very proud to announce our recent paper at Eurographics 2017 in Lyon; the spotlight was showcased today, with narration by Joep Moritz. If you didn’t get the chance to see it, an extended version is now available on YouTube.


06 Apr 2017 . research .
Leaving UCL meal with Tim's group

After being with Tim Weyrich’s group for almost a year and a half, yesterday we had our final group lunch and a cheeky beer. It has been great working with everyone at UCL, not just the immediate group, and in that vein there are more shenanigans to come.


26 Aug 2016 . research .
Transferring old book image style to real world photos

Style transfer has become a popular area of research, with public applications such as Prisma based on Neural Style Transfer [Gatys'15]. Earlier this week I wanted to answer the question: does it really work for capturing the larger style and context? And how does it compare to [Wang'13], where an artistic stroke style was learnt? The British Library Flickr 1M+ dataset [BL'13] provides an interesting application of this, since there is an inherent style to transfer.

Having read the papers around this previously, I was fairly sure it would not transfer very well, but on the chance that local statistics could enforce something coherent it was worth running (and it also gave me a chance to play with such networks). Taking a few examples, these are the best results from transferring the BL'13 style to the Berkeley Segmentation Dataset (BSDS500) [BSDS500'11].

These results were achieved using the Texture Networks method [Ulyanov'16] trained on a single example, with the most visually appealing results displayed after playing with the texture/style weights as well as the chosen example. Code is available on GitHub.
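For anyone wanting to experiment without setting up the feed-forward Texture Networks training, the core idea can be sketched with the original optimisation-based formulation of [Gatys'15]: matching Gram matrices of VGG features of the style image while preserving content features of the photo. The snippet below is a minimal sketch of that idea, not the Texture Networks pipeline actually used for the results above; the file paths, layer choices, iteration count and loss weights are placeholder assumptions that would need tuning, much like the texture/style weights mentioned above.

```python
# Minimal optimisation-based style transfer sketch in the spirit of [Gatys'15].
# NOT the feed-forward Texture Networks [Ulyanov'16] code used above.
# Paths, layer indices and weights are illustrative placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load(path, size=256):
    # Load and crop an image to a square tensor in [0, 1].
    # (ImageNet normalisation is omitted here for brevity.)
    tf = transforms.Compose([transforms.Resize(size),
                             transforms.CenterCrop(size),
                             transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def gram(feat):
    # Gram matrix of feature maps: the style statistic matched by the loss.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# Pretrained VGG19 as a fixed feature extractor (torchvision >= 0.13).
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

style_layers = {1, 6, 11, 20, 29}   # relu1_1 ... relu5_1 in vgg19.features
content_layer = 22                  # relu4_2

def features(x):
    # Collect style and content activations in a single forward pass.
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in style_layers:
            style_feats.append(x)
        if i == content_layer:
            content_feat = x
        if i >= max(style_layers | {content_layer}):
            break
    return style_feats, content_feat

style_img = load("bl_engraving.jpg")   # e.g. a British Library scan (placeholder path)
content_img = load("bsds_photo.jpg")   # e.g. a BSDS500 photo (placeholder path)

with torch.no_grad():
    target_grams = [gram(f) for f in features(style_img)[0]]
    target_content = features(content_img)[1]

# Optimise the pixels of the output image directly, starting from the photo.
result = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([result], lr=0.02)

for step in range(500):
    opt.zero_grad()
    s_feats, c_feat = features(result)
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(s_feats, target_grams))
    content_loss = F.mse_loss(c_feat, target_content)
    loss = 1e4 * style_loss + content_loss   # the style weight needs tuning per example
    loss.backward()
    opt.step()
    result.data.clamp_(0, 1)                 # keep the image in a valid range
```

The practical difference with [Ulyanov'16] is that Texture Networks amortise this per-image optimisation into a feed-forward generator trained once per style, which makes it far more convenient when stylising many photos with the same single exemplar.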

What is quite interesting is that if you look at these from a distance they look plausible, but it isn't until you look at one adjacent to another, or zoom in, that you realise these aren't actually the same style. Logical hatching patterns that describe shadows or depth are ignored, or in the cases where they are present they don't make sense.

As a mini-conclusion, style transfer, although gaining a lot of hype, still has a long way to go before it accurately reproduces general artistic style. Still, the results are interesting, and if you aren't looking for exact replication then they are visually appealing. It must be borne in mind that the British Library dataset is challenging, with a style that has been evolving for human understanding over millennia. A problem to keep working on, possibly guided by transfer learning.

References

[Gatys'15] Leon A. Gatys, Alexander S. Ecker and Matthias Bethge. "A Neural Algorithm of Artistic Style". arXiv (http://arxiv.org/abs/1508.06576). 2015.
[Wang'13] T. Wang, J. Collomosse, D. Greig and A. Hunter. "Learnable Stroke Models for Example-based Portrait Painting". Proc. British Machine Vision Conference (BMVC). 2013.
[BL'13] British Library Flickr. https://www.flickr.com/photos/britishlibrary/. 2013.
[BSDS500'11] Berkeley Segmentation Dataset. https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html#bsds500. 2011.
[Ulyanov'16] Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi and Victor Lempitsky. "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images". arXiv (http://arxiv.org/abs/1603.03417). 2016.