Anthony van Dyck’s Portrait of Don Felipe de Guzmán: When Skepticism Turns into Collaboration

Art authentication has never been a straight line of progress. It advances in fits and starts, shaped as much by skepticism as by discovery. Traditional connoisseurship, grounded in long looking and deep familiarity, remains one of its central pillars. Over time, it has been joined by scientific tools, from spectroscopy and X-ray imaging in the nineteenth and twentieth centuries to the more recent emergence of artificial intelligence. Each new method has arrived with promise and provocation. And each, inevitably, has met resistance.

That resistance is not difficult to understand. Authentication deals in reputations, markets, and careers, and any new tool that claims insight into authorship challenges long-established authority. In 2023, one of the most articulate skeptics of AI’s role in this field was Nils Büttner, Professor of Medieval and Modern Art History at the State Academy of Fine Arts in Stuttgart. In a published article, he voiced sharp criticism of AI-based authentication in general, and of Art Recognition’s technology in particular. At the time, the relationship was openly adversarial.

What followed was not a retreat, but an invitation to test assumptions on both sides.

The opportunity came in the form of a long-debated portrait attributed to Anthony van Dyck. The painting depicts Don Diego Messía Felipe de Guzmán, Marqués de Leganés, a powerful Spanish nobleman and military commander. While the composition is well known, the attribution of this specific version has been contested for years. Many scholars have argued that it is not an autograph work by van Dyck himself, but rather a product of his workshop or a close follower.

The complexity of the case lies in the existence of multiple versions. The original composition, securely attributed to van Dyck, is held by the Tokyo Fuji Art Museum (Fig. 2). A second version is preserved at the Fundación Santander in Madrid (Fig. 3). The painting under investigation sits uneasily between these reference points, close enough to invite attribution, yet divergent enough to raise doubt.

This made it an ideal test case for interdisciplinary collaboration. For the AI analysis, Art Recognition assembled a training dataset of confirmed van Dyck works, with selections validated directly by Büttner himself. At the same time, Büttner undertook a traditional connoisseurial examination of the painting. His conclusion was clear and unequivocal. The work was not autograph. It did not originate from van Dyck’s own hand, a judgment that aligned with earlier scholarly assessments.

The AI model reached the same conclusion, independently, assigning the painting a 79 percent probability of not being authentic. While AI results rarely mirror human conclusions perfectly, the convergence in this case was striking. Two fundamentally different modes of analysis, one grounded in historical expertise and visual intuition, the other in statistical pattern recognition, arrived at the same answer.

What makes this case especially instructive is not the verdict itself, but the process. A vocal critic of AI did not simply reject the technology outright. Instead, he engaged with it, tested it, challenged its assumptions, and helped shape its parameters. The result was not capitulation, but collaboration. That collaboration culminated in a joint scholarly article, marking a rare moment of methodological bridge-building in a field often divided by entrenched positions.

The van Dyck case demonstrates what AI can be when it is treated neither as an oracle nor as a threat. It becomes a tool, one that gains credibility not by replacing expertise, but by standing up to it. In doing so, it points toward a future of authentication that is less about rivalry between methods and more about their convergence. This case was also the subject of an episode of the Is It? podcast.