I think I found a "bug". Not really sure if it's due to the buggy jpg.church site or the program not scraping everything (most likely the former), but the program doesn't seem to download everything from a specific link. That is, some uploaders have NSFW tags on some pics, and you need to log in first to view them.
After that you can finally view them on the site... but even then, even after getting the new links, you still can't download them. Kind of weird.
So the only way to work around this is: make sure you can view all the pics on the site (nothing is hidden). If anything is hidden, create an account and log in, then go to Embed Codes > "Direct links viewing", copy each of those links, and paste them into the program. Now it will download everything. The only downside is that the output folder won't have the correct folder name assigned. Other than that, everything is correct (the pictures, the resolutions, the file sizes, misc., etc.).
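For what it's worth, the "copy the direct links and feed them in" step can be sketched roughly like this. This is just my guess at the idea, not the program's actual code; the cookie header, function names, and URLs are all placeholders:

```python
import os
import urllib.request
from urllib.parse import urlparse

def filename_from_url(url):
    """Last path segment of the URL, used as the output filename.
    (Only the filename survives, which is why the original folder
    name is lost, as noted above.)"""
    return os.path.basename(urlparse(url).path) or "unnamed"

def download_direct_links(links, cookie, out_dir="downloads"):
    """Fetch each direct link, sending the logged-in session cookie
    so the NSFW-tagged images are served instead of a login page."""
    os.makedirs(out_dir, exist_ok=True)
    for url in links:
        req = urllib.request.Request(url, headers={"Cookie": cookie})
        dest = os.path.join(out_dir, filename_from_url(url))
        with urllib.request.urlopen(req) as resp, open(dest, "wb") as f:
            f.write(resp.read())
```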
It works, just with a few extra steps.
P.S. IMO the hang on "urls.txt" is probably due to the site itself being slow (referring to jpg.church); you probably won't see this hang with other site sources.
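If the slow host really is the cause, one way a downloader can avoid hanging forever is a per-request timeout plus a couple of retries, so one stalled URL fails instead of freezing the whole run. A minimal sketch (the numbers are arbitrary, and this isn't how the program necessarily handles it):

```python
import socket
import urllib.error
import urllib.request

def fetch_with_timeout(url, timeout=15, retries=2):
    """Return the response body, or None once retries are exhausted,
    so a slow host like jpg.church can't stall the run indefinitely."""
    for attempt in range(retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, socket.timeout):
            if attempt == retries:
                return None  # give up on this URL and move on
    return None
```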