We consider a community of users who must make periodic decisions about whether to interact with one another. We propose a protocol that allows honest users to interact with each other reliably, while limiting the damage done by each malicious or incompetent user. The worst-case cost per user is sublinear in the average number of interactions per user and independent of the number of users. Our guarantee holds simultaneously for every group of honest users; for example, multiple groups of users with incompatible tastes or preferences can coexist. As a motivating example, we consider a game in which players have periodic opportunities to do one another favors but minimal ability to determine when a favor was done. In this setting, our protocol achieves nearly optimal collective welfare while remaining resistant to exploitation. Our results also apply to a collaborative filtering setting in which users must make periodic decisions about whether to interact with resources such as movies or restaurants. In this setting, we guarantee that any set of honest users achieves a payoff nearly as good as if they had identified the optimal set of resources in advance and then interacted only with resources from that set.
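The final guarantee is a best-in-hindsight comparison: honest users' realized payoff is measured against what they would have earned by committing to the optimal set of resources up front. A minimal sketch of that comparison, using a hypothetical explore-then-exploit learner and made-up payoff numbers (this is not the paper's protocol, only an illustration of the benchmark):

```python
# Toy illustration of the best-in-hindsight benchmark from the abstract.
# Assumptions (not from the paper): each resource yields a fixed payoff,
# and the learner tries each resource once before committing to the best.

def best_in_hindsight(payoffs, rounds):
    """Total payoff of interacting only with the single best resource,
    as if it had been identified in advance."""
    return max(payoffs) * rounds

def explore_then_exploit(payoffs, rounds):
    """Try each resource once, then repeat the best one observed so far."""
    total = 0
    best_seen = None
    for t in range(rounds):
        if t < len(payoffs):      # exploration phase: sample resource t
            reward = payoffs[t]
        else:                     # exploitation phase: reuse the best seen
            reward = best_seen
        best_seen = reward if best_seen is None else max(best_seen, reward)
        total += reward
    return total

payoffs = [2, 9, 5]   # hypothetical per-interaction payoff of each resource
rounds = 10
print(best_in_hindsight(payoffs, rounds))     # -> 90
print(explore_then_exploit(payoffs, rounds))  # -> 79
```

The gap between the two totals (here 11, paid entirely during exploration) is the regret; the abstract's guarantee says the honest users' analogue of this gap stays small relative to the benchmark.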