As I understand it from the documentation, the smallest delay you can get from the tapin~/tapout~ pair is one signal vector. According to the delay~ reference, delay~ can produce delays shorter than the vector size, down to a single sample. However, if I add the vector size (in samples) to the desired delay in samples and then convert the total to ms, tapin~/tapout~ *appears* to behave just like delay~. Am I imagining that? Padding the tapin~/tapout~ delay by the vector size doesn't make sense to me.
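To make the conversion concrete, here's a sketch of the arithmetic I'm doing, expressed in Python. The function name is mine, and the vector size and sample rate are just example values (a 64-sample signal vector at 44.1 kHz); Max lets you change both:

```python
def compensated_delay_ms(desired_samples, vector_size=64, sample_rate=44100.0):
    """Pad the desired delay by one signal vector's worth of samples,
    then convert the total to milliseconds for tapout~."""
    total_samples = desired_samples + vector_size
    return total_samples / sample_rate * 1000.0

# e.g. aiming for a 1-sample delay with a 64-sample vector at 44.1 kHz:
print(round(compensated_delay_ms(1), 4))
```

So for a nominal 1-sample delay I'd actually send about 1.474 ms to tapout~, and that's the value that seems to line up with delay~'s behavior.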