As with most things, it depends...
The longer a transaction is open against a set of Tables, the longer those Tables are locked for. Locking on a Table causes contention with other queries trying to read from and write to that Table. So breaking a transaction up into multiple smaller transactions reduces the continuous runtime those Tables are locked for. It can also better isolate locking to only the Tables that absolutely need to be locked for that part of the code.
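As a concrete sketch of breaking one big operation into smaller transactions (the table and column names here are hypothetical), a large purge can be done in short batches so each transaction holds its locks only briefly:

```sql
-- Hypothetical example: purge old rows in small batches so each
-- transaction (and the locks it takes out) stays short-lived.
DECLARE @RowsAffected INT = 1;

WHILE @RowsAffected > 0
BEGIN
    BEGIN TRANSACTION;

    DELETE TOP (5000)
    FROM dbo.EventLog            -- hypothetical table
    WHERE CreatedDate < DATEADD(YEAR, -1, GETDATE());

    SET @RowsAffected = @@ROWCOUNT;

    COMMIT TRANSACTION;          -- locks are released between batches,
                                 -- letting other queries get in
END
```

Each iteration commits before the next starts, so other queries against the same Table get a chance to run between batches instead of waiting behind one long transaction.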
But a lot of relational problems are best solved in a relational manner (as opposed to iteratively or procedurally), and there is overhead in creating and committing (or rolling back) multiple transactions and taking out the appropriate locks for each of them. A single transaction is sometimes the best way to minimize that overhead.
For example, if you're trying to update 100 rows in a Table, in most cases updating all 100 rows in a single transaction / single UPDATE statement will be a lot quicker than opening 100 transactions and iterating through the Table 100 times, row by row.
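To make that example concrete, here is a sketch of the two approaches (the table and column names are hypothetical):

```sql
-- Relational: one transaction, one statement covers all 100 rows.
UPDATE dbo.Orders               -- hypothetical table
SET Status = 'Archived'
WHERE OrderID BETWEEN 1 AND 100;

-- Iterative: 100 transactions, one row at a time.
-- Almost always slower in total runtime than the single UPDATE above.
DECLARE @i INT = 1;
WHILE @i <= 100
BEGIN
    BEGIN TRANSACTION;

    UPDATE dbo.Orders
    SET Status = 'Archived'
    WHERE OrderID = @i;

    COMMIT TRANSACTION;

    SET @i = @i + 1;
END
```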
Generally the tradeoff is: the query you're trying to run will run quickest itself with the fewest transactions, but will potentially cause the most contention on the Tables involved for the rest of the server / other queries accessing those Tables. By breaking the query / logic up into multiple transactions, you likely reduce the overall contention on those Tables for the rest of the server, but may incur some performance reduction in the specific query that was broken up.
So the decision comes down to how important it is for that specific query to run quickly vs how busy the rest of the server is with the same Tables that query uses, and how long those other queries can afford to sit waiting to perform their own operations.
A DELETE on a Table using your CTE as the source will be faster than looping through and doing multiple DELETEs (relational logic as opposed to iterative). A cursor is inherently slower than a loop in SQL Server, so running the same logic iteratively with a cursor will usually be slower than with a loop; cursors are almost always recommended against for being notoriously underperforming. An iterative approach will almost always be slower in total runtime than an appropriately equivalent single DELETE statement's execution plan, or at best negligibly similar. The point stands though: relational solutions are usually the most efficient for relational problems.
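For reference, a CTE-driven DELETE looks like this (a sketch with hypothetical table and column names, using the common remove-duplicates pattern):

```sql
-- Identify duplicate rows relationally with a CTE, then delete
-- through the CTE in one statement instead of looping row by row.
WITH Duplicates AS
(
    SELECT ROW_NUMBER() OVER
           (
               PARTITION BY CustomerEmail       -- hypothetical duplicate key
               ORDER BY CreatedDate DESC        -- keep the newest row
           ) AS RowNum
    FROM dbo.Customers                          -- hypothetical table
)
DELETE FROM Duplicates
WHERE RowNum > 1;   -- everything past the first row per key is a duplicate
```

Because the CTE is the direct target of the DELETE, SQL Server resolves the whole operation in a single execution plan and a single transaction, rather than one plan and one transaction per row.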