This year I’m very excited to be organising the VISART IV workshop with several other great chairs:

- Alessio Del Bue, Istituto Italiano di Tecnologia (IIT)
- Leonardo Impett, EPFL & Bibliotheca Hertziana, Max Planck Institute for Art History
- Peter Hall, University of Bath
- Joao Paulo Costeira, ISR, Instituto Superior Técnico
- Peter Bell, Friedrich-Alexander University Nürnberg

We hope that this year pushes the collaboration between Computer Vision, Digital Humanities and Art History even further, with the aim of generating some fantastic new partnerships to be published at this workshop and future ones.

This closer bond is exemplified by the new track, which invites DH and Art History researchers to join the conversation about what they are doing with Computer Vision and how we can help them in the future. I’m very excited to see what gets submitted.

So here is the Call for Papers, enjoy!

VISART IV “Where Computer Vision Meets Art” Pre-Announcement

4th Workshop on Computer VISion for ART Analysis In conjunction with the 2018 European Conference on Computer Vision (ECCV), Cultural Center (Kulturzentrum Gasteig), Munich, Germany

IMPORTANT DATES

- Full & Extended Abstract Paper Submission: July 9th 2018
- Notification of Acceptance: August 3rd 2018
- Camera-Ready Paper Due: September 21st 2018
- Workshop: September 9th 2018

CALL FOR PAPERS

Following the success of the previous editions of the Workshop on Computer VISion for ART Analysis held in 2012, 2014 and 2016, we present the VISART IV workshop, in conjunction with the 2018 European Conference on Computer Vision (ECCV 2018). VISART will continue its role as a forum for the presentation, discussion and publication of computer vision techniques for the analysis of art. In contrast with prior editions, VISART IV will expand its remit, offering two tracks for submission:

1. Computer Vision for Art - technical work (standard ECCV submission, 14 pages excluding references)
2. Uses and Reflection of Computer Vision for Art (extended abstract, 4 pages excluding references)

The recent explosion in the digitisation of artworks highlights the concrete importance of applications at the overlap between computer vision and art, such as the automatic indexing of databases of paintings and drawings, or automatic tools for the analysis of cultural heritage. Such an encounter, however, also opens the door both to a wider computational understanding of the image beyond photo-geometry, and to a deeper critical engagement with how images are mediated, understood or produced by computer vision techniques in the ‘Age of Image-Machines’ (T. J. Clark). Whereas submissions to our first track should primarily consist of technical papers, our second track therefore encourages critical essays or extended abstracts from art historians, artists, cultural historians, media theorists and computer scientists.

The purpose of this workshop is to bring together leading researchers in the fields of computer vision and the digital humanities with art and cultural historians and artists, to promote interdisciplinary collaborations, and to expose the hybrid community to cutting-edge techniques and open problems on both sides of this fascinating area of study.

This one-day workshop in conjunction with ECCV 2018, calls for high-quality, previously unpublished, works related to Computer Vision and Cultural History. Submissions for both tracks should conform to the ECCV 2018 proceedings style. Papers must be submitted online through the CMT submission system at:

https://cmt3.research.microsoft.com/VISART2018/

and will be double-blind peer reviewed by at least three reviewers.

TOPICS include but are not limited to:

- Art History and Computer Vision
- 3D reconstruction from visual art or historical sites
- Artistic style transfer from artworks to images and 3D scans
- 2D and 3D human pose estimation in art
- Image and visual representation in art
- Computer Vision for cultural heritage applications
- Authentication, forensics and dating
- Big-data analysis of art
- Media content analysis and search
- Visual Question Answering (VQA) or Captioning for Art
- Visual human-machine interaction for Cultural Heritage
- Multimedia databases and digital libraries for artistic and art-historical research
- Interactive 3D media and immersive AR/VR environments for Cultural Heritage
- Digital recognition, analysis or augmentation of historical maps
- Security and legal issues in the digital presentation and distribution of cultural information
- Surveillance and behaviour analysis in Galleries, Libraries, Archives and Museums

INVITED SPEAKERS

- Peter Bell (Professor of Digital Humanities - Art History, Friedrich-Alexander University Nürnberg)
- Bjorn Ommer (Professor of Computer Vision, Heidelberg)
- Eva-Maria Seng (Chair of Tangible and Intangible Heritage, Faculty of Cultural Studies, University of Paderborn)
- More speakers TBC

PROGRAM COMMITTEE

To be confirmed.

ORGANIZERS

- Alessio Del Bue, Istituto Italiano di Tecnologia (IIT)
- Leonardo Impett, EPFL & Bibliotheca Hertziana, Max Planck Institute for Art History
- Stuart James, Istituto Italiano di Tecnologia (IIT)
- Peter Hall, University of Bath
- Joao Paulo Costeira, ISR, Instituto Superior Técnico
- Peter Bell, Friedrich-Alexander University Nürnberg

I’m very proud to announce our recent paper at Eurographics 2017 in Lyon: today the spotlight video was showcased, with narration by Joep Moritz. If you didn’t get the chance to see it, an extended version is now available on YouTube.

After almost a year and a half with Tim Weyrich’s group, yesterday we had our final group lunch and a cheeky beer. It has been great working with everyone at UCL, not just in the immediate group, and in that vein there are more shenanigans to come.

Style transfer has become a popular area of research, with public applications such as Prisma based on Neural Style Transfer [Gatys'15]. Earlier this week I wanted to answer the question: does it really work for understanding larger style and context, and how does it compare to [Wang'13], where an artistic stroke style was learnt? The British Library Flickr 1m+ dataset [BL'13] provides an interesting application of this, since it has an inherent style to transfer.

Having previously read the papers around this, I was fairly sure it would not transfer very well, but on the chance that local statistics could enforce something coherent it was worth running (and it also gave me a chance to play with such networks). Taking a few examples, these are the best results from transferring from BL'13 to the Berkeley Segmentation Dataset (BSDS500) [BSDS500'11]:

These results were achieved using the Texture Nets method [Ulyanov'16] trained on a single example; the most visually appealing results are displayed, after playing with the texture/style weights as well as the chosen example. Code is available on GitHub.

What is quite interesting is that from a distance the results look plausible, but it isn't until you look at one adjacent to another, or zoom in, that you realise these aren't actually the same style. Logical hatching patterns that describe shadow or depth are ignored, or where they are present they don't make sense.

As a mini-conclusion: style transfer, although gaining a lot of hype, still has a long way to go before it can accurately reproduce general artistic style. Still, the results are interesting, and if you aren't looking for exact replication they are visually appealing. It must be borne in mind that the British Library dataset is challenging, as its style has been evolving for human understanding over millennia. A problem to keep working on, possibly guided by transfer learning.

[Gatys'15] Leon A Gatys, Alexander S Ecker and Matthias Bethge. "A Neural Algorithm of Artistic Style". arXiv (http://arxiv.org/abs/1508.06576). 2015.

[Wang'13] T Wang, J Collomosse, D Greig and A Hunter. "Learnable Stroke Models for Example-based Portrait Painting". Proc. British Machine Vision Conference (BMVC). 2013.

[BL'13] British Library Flickr. https://www.flickr.com/photos/britishlibrary/. 2013.

[BSDS500'11] Berkeley Segmentation Dataset. https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html#bsds500. 2011.

[Ulyanov'16] Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi and Victor Lempitsky. "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images". arXiv (http://arxiv.org/abs/1603.03417). 2016.

Earlier this year Tim Weyrich directed me to a dataset published by the British Library, and since then my research has focused heavily around it. Within Computer Vision it is unusual to get a large dataset that is not skewed towards a specific research goal. Sometimes datasets can be repurposed, but this requires extensive effort to get the data into its rawest form.

The British Library dataset is quite literally a "dump" of all the elements unknown to OCR from the book scanning performed by Microsoft. It is therefore not just a collection of line-art imagery, but also of elaborate characters and section embroidery.

I intend to post more on working with this dataset as time goes on, but for now there is a GitHub repository which contains the directory of images: Directory of Images (GitHub)

And the main repository on Flickr: Flickr Repository

Image taken from Tou Blanc

I have used a variety of tools for binary, multiclass and even incremental SVM problems. Today I found something quite nice in the binary case for libSVM, although it is potentially a source of confusion.

It is common in machine learning to apply a sigmoid function to map a classifier's raw outputs into normalised probabilities; its parameters can be set by empirically defining the upper and lower bounds, or estimated from the data. Within libSVM they are estimated from the data, which is great for saving you some time. The only thing to remember is that this estimation uses random shuffling and cross-validation, so with small sets of data you are likely to get different results on each run.

So the function to consider is this:

```cpp
// Cross-validation decision values for probability estimates
static void svm_binary_svc_probability(
	const svm_problem *prob, const svm_parameter *param,
	double Cp, double Cn, double& probA, double& probB)
{
	int i;
	int nr_fold = 5;
	int *perm = Malloc(int,prob->l);
	double *dec_values = Malloc(double,prob->l);

	// random shuffle
	for(i=0;i<prob->l;i++) perm[i]=i;
	for(i=0;i<prob->l;i++)
	{
		int j = i+rand()%(prob->l-i);
		swap(perm[i],perm[j]);
	}
	for(i=0;i<nr_fold;i++)
	{
		int begin = i*prob->l/nr_fold;
		int end = (i+1)*prob->l/nr_fold;
		int j,k;
		struct svm_problem subprob;

		subprob.l = prob->l-(end-begin);
		subprob.x = Malloc(struct svm_node*,subprob.l);
		subprob.y = Malloc(double,subprob.l);

		k=0;
		for(j=0;j<begin;j++)
		{
			subprob.x[k] = prob->x[perm[j]];
			subprob.y[k] = prob->y[perm[j]];
			++k;
		}
		for(j=end;j<prob->l;j++)
		{
			subprob.x[k] = prob->x[perm[j]];
			subprob.y[k] = prob->y[perm[j]];
			++k;
		}
		int p_count=0,n_count=0;
		for(j=0;j<k;j++)
			if(subprob.y[j]>0)
				p_count++;
			else
				n_count++;

		if(p_count==0 && n_count==0)
			for(j=begin;j<end;j++)
				dec_values[perm[j]] = 0;
		else if(p_count > 0 && n_count == 0)
			for(j=begin;j<end;j++)
				dec_values[perm[j]] = 1;
		else if(p_count == 0 && n_count > 0)
			for(j=begin;j<end;j++)
				dec_values[perm[j]] = -1;
		else
		{
			svm_parameter subparam = *param;
			subparam.probability=0;
			subparam.C=1.0;
			subparam.nr_weight=2;
			subparam.weight_label = Malloc(int,2);
			subparam.weight = Malloc(double,2);
			subparam.weight_label[0]=+1;
			subparam.weight_label[1]=-1;
			subparam.weight[0]=Cp;
			subparam.weight[1]=Cn;
			struct svm_model *submodel = svm_train(&subprob,&subparam);
			for(j=begin;j<end;j++)
			{
				svm_predict_values(submodel,prob->x[perm[j]],&(dec_values[perm[j]]));
				// ensure +1 -1 order; reason not using CV subroutine
				dec_values[perm[j]] *= submodel->label[0];
			}
			svm_free_and_destroy_model(&submodel);
			svm_destroy_param(&subparam);
		}
		free(subprob.x);
		free(subprob.y);
	}
	sigmoid_train(prob->l,dec_values,prob->y,probA,probB);
	free(dec_values);
	free(perm);
}
```

So if you have a small number of samples, as is the case in some circumstances, then the cross-validation is where you hit problems. Of course you can simply re-implement it yourself, or you can add a few lines to skip cross-validation when the number of samples is too small.

Not the most elegant of code, but for the moment it will do. I chose to completely separate the two branches as opposed to multiple ifs:

```cpp
static void svm_binary_svc_probability(
	const svm_problem *prob, const svm_parameter *param,
	double Cp, double Cn, double& probA, double& probB)
{
	int i;
	int nr_fold = 5;
	int *perm = Malloc(int,prob->l);
	double *dec_values = Malloc(double,prob->l);

	// random shuffle
	for(i=0;i<prob->l;i++) perm[i]=i;
	for(i=0;i<prob->l;i++)
	{
		int j = i+rand()%(prob->l-i);
		swap(perm[i],perm[j]);
	}
	if (prob->l < (5*nr_fold))
	{
		// too few samples for cross-validation:
		// train and evaluate on the full problem instead
		int begin = 0;
		int end = prob->l;
		int j,k;
		struct svm_problem subprob;

		subprob.l = prob->l;
		subprob.x = Malloc(struct svm_node*,subprob.l);
		subprob.y = Malloc(double,subprob.l);

		k=0;
		for(j=0;j<prob->l;j++)
		{
			subprob.x[k] = prob->x[perm[j]];
			subprob.y[k] = prob->y[perm[j]];
			++k;
		}
		int p_count=0,n_count=0;
		for(j=0;j<k;j++)
			if(prob->y[j]>0)
				p_count++;
			else
				n_count++;

		if(p_count==0 && n_count==0)
			for(j=begin;j<end;j++)
				dec_values[perm[j]] = 0;
		else if(p_count > 0 && n_count == 0)
			for(j=begin;j<end;j++)
				dec_values[perm[j]] = 1;
		else if(p_count == 0 && n_count > 0)
			for(j=begin;j<end;j++)
				dec_values[perm[j]] = -1;
		else
		{
			svm_parameter subparam = *param;
			subparam.probability=0;
			subparam.C=1.0;
			subparam.nr_weight=2;
			subparam.weight_label = Malloc(int,2);
			subparam.weight = Malloc(double,2);
			subparam.weight_label[0]=+1;
			subparam.weight_label[1]=-1;
			subparam.weight[0]=Cp;
			subparam.weight[1]=Cn;
			struct svm_model *submodel = svm_train(&subprob,&subparam);
			for(j=begin;j<end;j++)
			{
				svm_predict_values(submodel,prob->x[perm[j]],&(dec_values[perm[j]]));
				// ensure +1 -1 order; reason not using CV subroutine
				dec_values[perm[j]] *= submodel->label[0];
			}
			svm_free_and_destroy_model(&submodel);
			svm_destroy_param(&subparam);
		}
		free(subprob.x);
		free(subprob.y);
	}
	else
	{
		for(i=0;i<nr_fold;i++)
		{
			int begin = i*prob->l/nr_fold;
			int end = (i+1)*prob->l/nr_fold;
			if (nr_fold == 1)
			{
				begin = 0;
				end = prob->l;
			}
			int j,k;
			struct svm_problem subprob;

			subprob.l = prob->l-(end-begin);
			subprob.x = Malloc(struct svm_node*,subprob.l);
			subprob.y = Malloc(double,subprob.l);

			k=0;
			for(j=0;j<begin;j++)
			{
				subprob.x[k] = prob->x[perm[j]];
				subprob.y[k] = prob->y[perm[j]];
				++k;
			}
			for(j=end;j<prob->l;j++)
			{
				subprob.x[k] = prob->x[perm[j]];
				subprob.y[k] = prob->y[perm[j]];
				++k;
			}
			int p_count=0,n_count=0;
			for(j=0;j<k;j++)
				if(subprob.y[j]>0)
					p_count++;
				else
					n_count++;

			if(p_count==0 && n_count==0)
				for(j=begin;j<end;j++)
					dec_values[perm[j]] = 0;
			else if(p_count > 0 && n_count == 0)
				for(j=begin;j<end;j++)
					dec_values[perm[j]] = 1;
			else if(p_count == 0 && n_count > 0)
				for(j=begin;j<end;j++)
					dec_values[perm[j]] = -1;
			else
			{
				svm_parameter subparam = *param;
				subparam.probability=0;
				subparam.C=1.0;
				subparam.nr_weight=2;
				subparam.weight_label = Malloc(int,2);
				subparam.weight = Malloc(double,2);
				subparam.weight_label[0]=+1;
				subparam.weight_label[1]=-1;
				subparam.weight[0]=Cp;
				subparam.weight[1]=Cn;
				struct svm_model *submodel = svm_train(&subprob,&subparam);
				for(j=begin;j<end;j++)
				{
					svm_predict_values(submodel,prob->x[perm[j]],&(dec_values[perm[j]]));
					// ensure +1 -1 order; reason not using CV subroutine
					dec_values[perm[j]] *= submodel->label[0];
				}
				svm_free_and_destroy_model(&submodel);
				svm_destroy_param(&subparam);
			}
			free(subprob.x);
			free(subprob.y);
		}
	}
	sigmoid_train(prob->l,dec_values,prob->y,probA,probB);
	free(dec_values);
	free(perm);
}
```

As with a lot of my code-based posts, this is more for my own memory than anything, but hopefully it may help people unlock the secrets of libSVM.