is that one vertex might be shared by arbitrarily many triangles, so there would be no way to guarantee that two vertices of the same triangle don't both end up with, say, (1, 0, 0). You would have to split all vertices with valence >= 2 beforehand, the way game engines do for vertices along sharp edges, which would increase vertex shading load (because you would lose the post-transform cache) and bloat the VBO/IBO sizes. I guess you could also use geometry/tessellation shaders to address this, which would be faster than the pre-split approach but still slower than the technique described in the article (fine, though, if you're just using wireframe as a debugging aid). With the barycentric coordinates extension you don't need any of these hacks and can solve the distance-to-edge problem naturally.
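For reference, the fragment shader side of that looks roughly like this. A minimal sketch using GL_EXT_fragment_shader_barycentric (the 1.5 width factor and the colours are arbitrary choices, not anything from the article):

```glsl
#version 450
#extension GL_EXT_fragment_shader_barycentric : require

layout(location = 0) out vec4 outColor;

void main() {
    // gl_BaryCoordEXT holds this fragment's barycentric weights within
    // the current triangle; each component falls to zero at the edge
    // opposite the corresponding vertex.
    vec3 bary = gl_BaryCoordEXT;

    // Screen-space derivatives give a line width that stays roughly
    // one pixel wide regardless of triangle size or viewing angle.
    vec3 width = fwidth(bary);
    vec3 edge = smoothstep(vec3(0.0), width * 1.5, bary);
    float line = 1.0 - min(edge.x, min(edge.y, edge.z));

    // Blend a wire colour over a flat fill colour.
    outColor = mix(vec4(0.1, 0.1, 0.1, 1.0), vec4(1.0), line);
}
```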
It's great that cross-vendor barycentric coordinate support is finally arriving in Vulkan. I've been using them on Apple platforms for a while now; the ability to do interpolation manually gives you a lot of flexibility in how you lay out and structure your geometry data.
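For anyone curious, the same extension also lets you read the raw per-vertex values and do the interpolation yourself. A minimal sketch (the vColor attribute is just for illustration):

```glsl
#version 450
#extension GL_EXT_fragment_shader_barycentric : require

// pervertexEXT exposes the uninterpolated per-vertex values of the
// current triangle, indexed 0..2.
layout(location = 0) pervertexEXT in vec3 vColor[3];

layout(location = 0) out vec4 outColor;

void main() {
    // Ordinary perspective-correct interpolation, done by hand; you
    // could just as easily apply a custom weighting or blend here.
    vec3 c = gl_BaryCoordEXT.x * vColor[0]
           + gl_BaryCoordEXT.y * vColor[1]
           + gl_BaryCoordEXT.z * vColor[2];
    outColor = vec4(c, 1.0);
}
```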
I am particularly excited about combining these features with mesh shaders. We are finally at a point where fixed-function primitive data specification can be retired in favour of a fully programmable, massively parallel model. In the end, everything is just a compute shader running over a grid, where the elements of the grid can be groups of objects, objects, vertices, pixels or samples. Just lay out your geometry data in the way that is actually beneficial to your application and generate the rest on the fly. It's refreshing and liberating. Maybe at some point we can even retire fixed-function rasterisation, although that one is a bit trickier (especially given the massive benefits of tiled rasterisation and shading).
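To make that concrete, here is about the smallest possible GL_EXT_mesh_shader example: the triangle is generated entirely in the shader, with no vertex buffer or input assembly involved (the hard-coded positions are obviously just for illustration):

```glsl
#version 450
#extension GL_EXT_mesh_shader : require

layout(local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
layout(triangles, max_vertices = 3, max_primitives = 1) out;

void main() {
    // Declare how many vertices and primitives this workgroup emits.
    SetMeshOutputsEXT(3, 1);

    // Write clip-space positions directly; no fixed-function vertex
    // fetch anywhere in the pipeline.
    gl_MeshVerticesEXT[0].gl_Position = vec4(-0.5, -0.5, 0.0, 1.0);
    gl_MeshVerticesEXT[1].gl_Position = vec4( 0.5, -0.5, 0.0, 1.0);
    gl_MeshVerticesEXT[2].gl_Position = vec4( 0.0,  0.5, 0.0, 1.0);

    // Assemble the emitted vertices into one triangle.
    gl_PrimitiveTriangleIndicesEXT[0] = uvec3(0, 1, 2);
}
```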
This is great. It's always been an exercise in frustration to do wireframes well, especially with hidden line removal. But this has a lot of applications elsewhere, like cel shading, decals, and user interfaces.