Privacy is of crucial importance in the era of global connectivity, in which everything is interconnected anytime and everywhere and almost 2 billion users worldwide are connected to the Internet. Indeed, most of these users are concerned about their privacy. These concerns also apply to emerging research fields in computer science such as Multi-agent Systems. A Multi-agent System consists of a number of agents (which can be intelligent and/or autonomous) that interact with one another. An agent usually encapsulates personal information describing its principal (names, preferences, tastes, credit card numbers, etc.). Moreover, agents carry out interactions on behalf of their principals. As a result, agents usually exchange personal information about their principals, which may have a direct impact on their principals' privacy.

In this thesis, we focus on avoiding undesired information collection and undesired information processing in Multi-agent Systems. To avoid undesired information collection, we propose a decision-making model with which agents can decide whether or not disclosing personal information to other agents is acceptable. We also contribute a secure Agent Platform that allows agents to communicate with each other in a confidential fashion, i.e., external third parties cannot collect the information that two agents exchange. To avoid undesired information processing, we propose an identity management model for agents in a Multi-agent System. This model avoids undesired information processing by allowing agents to hold as many identities as needed to minimize data identifiability, i.e., the degree to which personal information can be directly attributed to a particular principal. Finally, we describe how we implemented this model in an existing agent platform.
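
To make the identity management idea more concrete, the following is a minimal, hypothetical sketch (not the actual model or platform described in this thesis): an agent keeps several partial identities, each consisting of an unlinkable pseudonym and only the attributes needed for one kind of interaction, so that no single counterpart can attribute the full set of personal information to the principal. The names MultiIdentityAgent, PartialIdentity, and identityFor are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

/**
 * Hypothetical sketch: an agent holding multiple partial identities to
 * reduce data identifiability. Each counterpart only ever sees one
 * pseudonym and the minimal attribute set bound to it.
 */
public class MultiIdentityAgent {

    /** A partial identity: a pseudonym plus a minimal attribute set. */
    record PartialIdentity(String pseudonym, Map<String, String> attributes) {}

    private final Map<String, PartialIdentity> identitiesByContext = new HashMap<>();

    /** Creates (or reuses) a partial identity for a given interaction context. */
    PartialIdentity identityFor(String context, Map<String, String> minimalAttributes) {
        return identitiesByContext.computeIfAbsent(context,
                c -> new PartialIdentity(UUID.randomUUID().toString(), minimalAttributes));
    }

    public static void main(String[] args) {
        MultiIdentityAgent agent = new MultiIdentityAgent();

        // A bookshop agent only learns reading preferences under one pseudonym...
        PartialIdentity shopping = agent.identityFor("bookshop",
                Map.of("preferredGenre", "science fiction"));

        // ...while a payment agent learns billing data under a different,
        // unlinkable pseudonym. Neither party sees the principal's full profile.
        PartialIdentity payment = agent.identityFor("payment-service",
                Map.of("cardToken", "tok_1234"));

        System.out.println("Bookshop sees:        " + shopping);
        System.out.println("Payment service sees: " + payment);
    }
}
```

Under these assumptions, data identifiability stays low because the attributes revealed in one context cannot be linked to those revealed in another without the cooperation of the agent itself.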