💼 Crack Deloitte Salesforce Developer Interview (2026): Top 50 Questions with Expert Answers

Published On: May 2, 2026

For Candidates with 3–5 Years of Experience


Deloitte is one of the Big Four consulting firms with a massive Salesforce practice. Their interviews for mid-level Salesforce developers are rigorous — expect a mix of deep technical questions, scenario-based problems, and consulting-mindset challenges. This guide covers the most frequently asked questions with answers calibrated for 3–5 years of experience.


📋 Table of Contents

  1. Apex & Core Development (Q1–Q12)
  2. Lightning Web Components & UI (Q13–Q20)
  3. Integration & APIs (Q21–Q28)
  4. Data Modeling & SOQL/SOSL (Q29–Q34)
  5. Security, Sharing & Governance (Q35–Q39)
  6. Automation & Flow (Q40–Q43)
  7. DevOps, Testing & Deployment (Q44–Q47)
  8. Scenario / Consulting-Mindset (Q48–Q50)

SECTION 1: Apex & Core Development


Q1. What is the difference between trigger.new and trigger.newMap in Apex triggers? When would you use each?

Answer:

Trigger.new returns a List of the new versions of sObject records involved in the trigger. Trigger.newMap returns a Map<Id, sObject> of those same records, keyed by their record ID.

When to use each:

  • Use Trigger.new when you need to iterate over all records in order or when records don’t yet have IDs (i.e., in before insert context, IDs aren’t assigned yet, so Trigger.newMap is null).
  • Use Trigger.newMap when you need fast O(1) lookup of a specific record by ID — for example, when comparing parent records or doing cross-object lookups.

Example scenario: In a before update trigger, if you need to find which records changed a specific field, you’d do:

for (Id recId : Trigger.newMap.keySet()) {
    if (Trigger.newMap.get(recId).Status__c != Trigger.oldMap.get(recId).Status__c) {
        // field changed
    }
}

Q2. Explain the concept of “bulkification” in Apex. Why is it critical at Deloitte-level enterprise implementations?

Answer:

Bulkification means writing Apex that correctly handles records in bulk without hitting governor limits. Salesforce passes trigger records in chunks of up to 200 per trigger invocation, and a large DML or data load fires the trigger repeatedly within the same transaction, so per-record SOQL or DML multiplies fast.

Key principles:

  1. Never put SOQL/DML inside a loop. Always collect IDs, query outside the loop, then process in a map.
  2. Use collections (Lists, Maps, Sets) to batch operations.
  3. Aggregate logic before committing DML.

Why it matters at enterprise scale: Deloitte implementations often involve data migrations, batch ETL loads, and integrations pushing thousands of records. Non-bulkified code will throw LimitException errors in production, causing data integrity failures that are expensive to remediate on client engagements.

Bad pattern:

for (Account acc : Trigger.new) {
    List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :acc.Id]; // SOQL in loop!
}

Good pattern:

Set<Id> accIds = Trigger.newMap.keySet();
Map<Id, List<Contact>> contactMap = new Map<Id, List<Contact>>();
for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :accIds]) {
    if (!contactMap.containsKey(c.AccountId)) contactMap.put(c.AccountId, new List<Contact>());
    contactMap.get(c.AccountId).add(c);
}

Q3. What is a “future method” and what are its limitations? How does it compare to Queueable Apex?

Answer:

A @future method is an Apex method that runs asynchronously in its own transaction with a fresh set of governor limits. It's typically used to make callouts from triggers (synchronous callouts aren't allowed in trigger context) or to defer expensive processing out of the current transaction.

Limitations of @future:

  • Cannot be called from another @future or Batch class
  • Cannot pass sObjects as parameters (only primitives or collections of primitives)
  • No job chaining
  • No job ID is returned, so job status is hard to track programmatically

Queueable Apex advantages over @future:

| Feature | @future | Queueable |
| --- | --- | --- |
| Job chaining | ❌ | ✅ (via System.enqueueJob in execute()) |
| sObject parameters | ❌ | ✅ |
| Job ID returned | ❌ | ✅ |
| Can call from Batch | ❌ | ✅ |

Interview tip: At Deloitte, the preference in modern code is Queueable Apex for async processing unless the use case is very simple and legacy-constrained.


Q4. Explain the difference between with sharing, without sharing, and inherited sharing in Apex classes.

Answer:

These keywords control whether Org-Wide Defaults (OWD) and sharing rules are enforced when Apex code runs.

  • with sharing: Enforces the running user’s sharing rules. Records the user cannot access are excluded from SOQL results. Use for any class that processes user-facing data.
  • without sharing: Ignores sharing rules entirely. The class can access all records regardless of user permissions. Use only for system-level operations like batch jobs or integration handlers where you need access to all records.
  • inherited sharing: Introduced in API version 44.0. The class inherits the sharing context of its caller. If called from a with sharing class, it runs with sharing. This is the recommended default for utility/helper classes that shouldn’t dictate their own sharing behavior.

Best practice for Deloitte interviews: Always declare sharing on every class. Never leave it implicit. Use inherited sharing for service layer classes that are called from multiple contexts.


Q5. What is a Virtual vs Abstract class in Apex? Give a practical use case for each.

Answer:

Abstract class:

  • Cannot be instantiated directly
  • Can contain abstract methods (no implementation) that subclasses must override
  • Use case: A base TriggerHandler framework where each trigger handler must implement handleBeforeInsert(), handleAfterUpdate(), etc.
public abstract class TriggerHandlerBase {
    public abstract void handleBeforeInsert(List<SObject> newRecords);
}

Virtual class:

  • Can be instantiated
  • Methods marked virtual can be optionally overridden
  • Use case: A base EmailNotificationService where subclasses can override getEmailTemplate() but don’t have to
public virtual class EmailNotificationService {
    public virtual String getEmailTemplate() {
        return 'DefaultTemplate';
    }
}

Key difference: Abstract = subclass MUST override abstract methods. Virtual = subclass CAN override virtual methods.


Q6. What are governor limits you monitor most closely in an enterprise Apex context? How do you handle approaching limits?

Answer:

The limits I watch most closely are:

| Limit | Cap |
| --- | --- |
| SOQL queries per transaction | 100 (sync), 200 (async) |
| DML statements | 150 |
| DML rows | 10,000 |
| CPU time | 10,000 ms (sync), 60,000 ms (async) |
| Heap size | 6 MB (sync), 12 MB (async) |
| Callouts | 100 |

Strategies to handle near-limit scenarios:

  1. Offload to async: Move expensive processing to Queueable or Batch Apex to get a fresh set of limits.
  2. Batch Apex for large data volumes: Break 100k+ record operations into 200-record chunks.
  3. Use Limits class proactively: Limits.getQueries() / Limits.getLimitQueries() to guard conditionally.
  4. Avoid re-querying: Cache results in static variables within a transaction.
  5. Review automation stacking: At Deloitte, complex orgs often have Flows + Triggers + Process Builders firing together — audit and consolidate.

Q7. How do you implement the Trigger Handler pattern? Why does Deloitte prefer it over logic directly in triggers?

Answer:

The Trigger Handler pattern separates trigger logic from business logic by keeping triggers thin (just routing calls) and putting all logic in dedicated handler classes.

Standard trigger (thin):

trigger AccountTrigger on Account (before insert, before update, after insert, after update) {
    AccountTriggerHandler handler = new AccountTriggerHandler();
    if (Trigger.isBefore && Trigger.isInsert) handler.onBeforeInsert(Trigger.new);
    if (Trigger.isAfter && Trigger.isUpdate) handler.onAfterUpdate(Trigger.new, Trigger.oldMap);
}

Why Deloitte prefers this:

  • Testability: Handler classes are independently testable without trigger context
  • Maintainability: Multiple developers can work on handlers without merge conflicts in a single trigger file
  • One trigger per object rule: Prevents ordering issues from multiple triggers
  • Bypass mechanism: Can add TriggerSettings__c custom metadata to disable triggers during data loads without deploying code

Q8. Explain the difference between Database.insert() with allOrNone=false vs. the insert DML statement.

Answer:

The standard insert DML statement uses all-or-none behavior by default: if any single record fails, the entire operation rolls back.

Database.insert(records, false) enables partial success: records that pass are committed, failed records are returned in Database.SaveResult[] with error details. Your code can then handle or log failures gracefully.

When to use partial DML at Deloitte:

  • Integration handlers processing inbound data from external systems where some records may be malformed
  • Batch jobs processing large datasets where you don’t want one bad record to block 199 others
  • Always log failures to a custom Error Log object for client visibility
List<Database.SaveResult> results = Database.insert(accountList, false);
for (Database.SaveResult sr : results) {
    if (!sr.isSuccess()) {
        for (Database.Error err : sr.getErrors()) {
            System.debug('Error: ' + err.getMessage());
            // Log to Error_Log__c
        }
    }
}

Q9. What is @TestSetup and how does it improve test performance?

Answer:

@TestSetup is a method annotation that creates test data once per test class and rolls it back between test methods (each method gets a fresh copy of the data). Without it, every @isTest method creates its own data, multiplying DML operations.

@TestSetup
static void setup() {
    Account acc = new Account(Name = 'Test Corp');
    insert acc;
    // Creates once, available to ALL test methods in the class
}

Performance benefit: In a test class with 20 test methods, @TestSetup reduces DML from 20× to 1× for setup data. This dramatically reduces test execution time in large Salesforce orgs with thousands of test methods — critical for Deloitte CI/CD pipelines.

Important nuance: @TestSetup data is re-queried at the start of each test method (it’s not shared in memory) — so any changes made in one test method don’t affect others.


Q10. How do you prevent recursive trigger execution in Salesforce?

Answer:

Recursive triggers occur when a trigger fires, its DML causes another trigger to fire on the same object, creating an infinite loop (or hitting Maximum trigger depth exceeded error).

Standard approach — static boolean flag:

public class TriggerHelper {
    public static Boolean isExecuting = false;
}

trigger AccountTrigger on Account (after update) {
    if (!TriggerHelper.isExecuting) {
        TriggerHelper.isExecuting = true;
        // logic here
        TriggerHelper.isExecuting = false;
    }
}

Limitation of boolean flag: It blocks ALL subsequent trigger executions in the transaction, even legitimate ones for different records.

Better approach — Set-based tracking:

public class TriggerHelper {
    public static Set<Id> processedIds = new Set<Id>();
}
// In trigger: only process IDs not already in the set

This allows different records to be processed while preventing the same record from being processed twice.


Q11. What is the difference between SOQL and SOSL? When would you use SOSL?

Answer:

SOQL (Salesforce Object Query Language): Queries a single object and its related objects. Best for precise, field-specific queries when you know the object.

SOSL (Salesforce Object Search Language): Searches across multiple objects and fields simultaneously using text search. Returns a List<List<SObject>>.

Use SOSL when:

  • You need to search across Account, Contact, Lead, and Opportunity simultaneously (global search)
  • You’re building a search feature and don’t know which object has the data
  • The search term could appear in multiple text fields
List<List<SObject>> results = [FIND 'Deloitte*' IN ALL FIELDS 
    RETURNING Account(Id, Name), Contact(Id, FirstName, LastName)];

SOSL limitations: SOSL was historically disallowed in triggers (it has been permitted since API v33.0) but is still discouraged there, because text-search results are non-deterministic and poorly suited to bulk contexts. The search term must be at least 2 characters long.


Q12. Explain Platform Events and how they differ from Custom Notifications or Change Data Capture.

Answer:

Platform Events: A publish-subscribe messaging framework built on Salesforce’s event bus. Publishers fire events; any subscriber (Apex trigger, Flow, or external system) processes them asynchronously.

Change Data Capture (CDC): Automatically publishes change events when Salesforce records are created, updated, deleted, or undeleted. You don’t define the event — Salesforce does. Best for real-time data replication to external systems.

Custom Notifications: Push notifications delivered to Salesforce users in the app or mobile. Not for system-to-system communication.

| Feature | Platform Events | CDC | Custom Notifications |
| --- | --- | --- | --- |
| Direction | Any system | SF → External | SF → Users |
| Schema | You define | SF defines | N/A |
| Use case | Integration middleware | Data sync | User alerts |
| Replay available | ✅ (72 hrs) | ✅ (3 days) | ❌ |

Deloitte context: Platform Events are commonly used as an integration backbone — external systems publish events that Salesforce processes, decoupling systems and avoiding tight integration dependencies.


SECTION 2: Lightning Web Components & UI


Q13. Explain the LWC component lifecycle hooks in order of execution.

Answer:

LWC lifecycle hooks fire in this order:

  1. constructor() — Called when component is created. DOM not yet rendered. Don’t access child components here.
  2. connectedCallback() — Called when component is inserted into the DOM. Use for initialization, event listener setup, or imperative data fetch.
  3. render() — Called to determine which template to render (used in conditional rendering scenarios).
  4. renderedCallback() — Called after every render (initial + subsequent). Use for DOM manipulation. Beware of infinite loops if you update reactive properties here.
  5. errorCallback(error, stack) — Called if a child component throws an error. Acts as an error boundary.
  6. disconnectedCallback() — Called when component is removed from DOM. Use for cleanup (removing event listeners, clearing timers).

Interview tip: A common Deloitte question is “where would you make a wire call vs. an imperative call?” — Wire calls are declarative and reactive (re-run when params change). Imperative calls in connectedCallback are useful for one-time fetches or when you need to handle the promise explicitly.


Q14. What is the difference between @track, @api, and @wire decorators in LWC?

Answer:

@api: Exposes a property or method as public — accessible from parent components. Used to pass data down the component tree. Any change from the parent triggers re-render.

@track: Makes a property reactive to deep mutations (nested object/array changes). In modern LWC (since the Spring ’20 release), all fields are reactive to top-level reassignment by default, so @track is only needed when you mutate a nested property of an object or array without reassigning the variable itself.

@wire: Declaratively connects a property or function to a Salesforce data source (Apex method, wire adapters like getRecord, getObjectInfo). Automatically refreshes when tracked parameters change.

@api recordId; // Public, passed from parent
@track filters = { status: 'Open', priority: 'High' }; // Deep reactivity needed
@wire(getOpportunities, { accountId: '$recordId' }) opportunities; // Reactive wire

Q15. How do you communicate between sibling LWC components?

Answer:

LWC components follow a unidirectional data flow — data flows down via @api, events bubble up. Siblings can’t communicate directly.

Approaches for sibling communication:

  1. Through a common parent: Child A fires a custom event → Parent catches it → Parent updates @api property on Child B. This is the recommended pattern for tightly related components.
  2. Lightning Message Service (LMS): The standard Salesforce-provided pub/sub service. Components subscribe to a Message Channel; any component on the page can publish to it. Works across DOM hierarchies, Visualforce iframes, and Aura components.
// Publisher (assumes `@wire(MessageContext) messageContext;` on the component class)
import { publish, MessageContext } from 'lightning/messageService';
import ACCOUNT_SELECTED from '@salesforce/messageChannel/AccountSelected__c';
publish(this.messageContext, ACCOUNT_SELECTED, { accountId: this.selectedId });

// Subscriber (same MessageContext wire required)
import { subscribe, MessageContext } from 'lightning/messageService';
this.subscription = subscribe(this.messageContext, ACCOUNT_SELECTED, (message) => {
    this.handleMessage(message);
});
  3. Custom pub/sub library (legacy): A shared singleton module. Avoid in new development — LMS is the official replacement.

Q16. What is @salesforce/apex wire adapter vs. imperative Apex call? When would you choose each?

Answer:

Wire (reactive):

@wire(getAccountList, { industry: '$selectedIndustry' })
wiredAccounts({ error, data }) { ... }
  • Automatically called and re-called when $selectedIndustry changes
  • Results are cached by Salesforce
  • Best for read-only data that depends on reactive inputs

Imperative:

async handleSearch() {
    try {
        const result = await getAccountList({ industry: this.selectedIndustry });
        this.accounts = result;
    } catch(error) { ... }
}
  • Explicit control over when the call is made
  • Can be triggered by user action (button click)
  • Required for mutations (insert/update/delete) — wire is read-only
  • Needed when you want to handle loading states or errors differently

Deloitte preference: Use wire for initial data loading; use imperative for user-triggered actions, form submissions, and any server-side DML operations.


Q17. What is the difference between lightning-record-form, lightning-record-view-form, and lightning-record-edit-form?

Answer:

| Component | Read | Write | Control over layout |
| --- | --- | --- | --- |
| lightning-record-form | ✅ | ✅ | Limited |
| lightning-record-view-form | ✅ | ❌ | Full control over fields |
| lightning-record-edit-form | ❌ | ✅ | Full control over fields |

lightning-record-form: Quickest to implement. Renders a complete form in view or edit mode. Good for standard use cases but limited customization.

lightning-record-view-form: Use when you need a read-only display with custom layout, conditional rendering of fields, or custom styling around specific fields.

lightning-record-edit-form: Use when you need a custom edit form — custom submit logic, custom validation messages, conditional field visibility, or complex field dependencies.

Deloitte tip: For complex forms on client projects, lightning-record-edit-form with custom validation using reportValidity() is the standard pattern because it gives full control while still honoring field-level security.


Q18. Explain slots in LWC. What are named slots and default slots?

Answer:

Slots allow parent components to inject HTML content into designated areas of a child component — enabling flexible, reusable component composition.

Default slot: Accepts any content placed between the child component tags in the parent.

<!-- Child: card.html -->
<div class="card"><slot></slot></div>

<!-- Parent -->
<c-card>
    <p>This content goes into the default slot</p>
</c-card>

Named slots: Allow multiple injection points with specific names.

<!-- Child: modal.html -->
<div class="modal">
    <div class="header"><slot name="header"></slot></div>
    <div class="body"><slot></slot></div>
    <div class="footer"><slot name="footer"></slot></div>
</div>

<!-- Parent -->
<c-modal>
    <span slot="header">Confirm Delete</span>
    <p>Are you sure?</p>
    <button slot="footer" onclick={handleConfirm}>Yes, Delete</button>
</c-modal>

Deloitte use case: Modal components, card containers, and layout wrappers at the design system level heavily use named slots to allow teams to compose pages without rebuilding containers.


Q19. What is @salesforce/label, @salesforce/i18n, and how do you handle internationalization in LWC?

Answer:

@salesforce/label: Imports Custom Labels for use in JavaScript or templates. Custom Labels are translatable strings stored in Salesforce — they automatically serve the right translation based on the user’s language setting.

import SAVE_BUTTON from '@salesforce/label/c.SaveButton';
import ERROR_MESSAGE from '@salesforce/label/c.ErrorMessage';

@salesforce/i18n: Provides locale-specific formatting — number format, currency, date format — based on the user’s locale.

import LOCALE from '@salesforce/i18n/locale'; // e.g., "en-US", "de-DE"
import CURRENCY from '@salesforce/i18n/currency'; // e.g., "USD", "EUR"

Best practices for Deloitte global clients:

  • Never hardcode strings in components — always use Custom Labels
  • Use lightning-formatted-number and lightning-formatted-date-time components which auto-locale-format
  • Test in multiple languages using Salesforce’s language settings in sandbox

Q20. How do you optimize LWC performance for a component that renders a large list of records?

Answer:

Key techniques:

  1. Pagination on the server side: Never load all records at once. Implement LIMIT/OFFSET in SOQL (OFFSET is capped at 2,000, so switch to keyset pagination, e.g. WHERE Id > :lastId, for deeper pages) or use cursor-based pagination. Return 50–100 records per page.
  2. Virtual scrolling / infinite scroll: Load the next batch as the user scrolls using IntersectionObserver API. lightning-datatable has built-in enable-infinite-loading.
  3. Memoize computed properties: Avoid expensive computations in getters that recalculate on every render. Cache results in reactive properties.
  4. Debounce search/filter inputs: Don’t fire a server call on every keypress. Implement a 300ms debounce.
  5. Use lightning-datatable for tabular data: It’s optimized for rendering large datasets with sorting, pagination, and row actions built in.
  6. Lazy load child components: Use if:true / lwc:if directives to render heavy components only when needed.
  7. Avoid unnecessary reactive property updates: Batch updates to avoid multiple re-renders within a single user action.
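The debounce in point 4 can be sketched in plain JavaScript (a framework-agnostic utility, not a Salesforce API; the 300 ms delay from the text is just a typical choice):

```javascript
// Generic debounce utility: delays `fn` until `delayMs` ms pass with no
// new calls, so rapid keystrokes collapse into a single server call.
function debounce(fn, delayMs) {
    let timerId = null;
    return function (...args) {
        clearTimeout(timerId);                                 // cancel the pending call
        timerId = setTimeout(() => fn.apply(this, args), delayMs);
    };
}
```

In a component you would create the wrapper once, e.g. `this.debouncedSearch = debounce(this.runSearch.bind(this), 300)` in connectedCallback, so every keystroke reuses the same timer.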

SECTION 3: Integration & APIs


Q21. What is the difference between REST and SOAP APIs in Salesforce integrations? When would you use each?

Answer:

REST API:

  • Uses HTTP methods (GET, POST, PATCH, DELETE)
  • Returns JSON or XML
  • Stateless, lightweight, easy to consume from modern clients
  • Supports standard CRUD + bulk operations
  • Best for: Modern integrations, mobile apps, web apps, microservices

SOAP API:

  • Uses XML-based WSDL contracts
  • Strongly typed — great for enterprises needing strict contracts
  • Supports all standard operations + some not in REST (e.g., merge, undelete, getUserInfo)
  • Best for: Legacy enterprise systems (ERP, mainframes), Java/.NET integrations that already use SOAP

Deloitte context: In most modern Deloitte Salesforce implementations, REST is preferred. SOAP is used when integrating with legacy SAP, Oracle ERP, or IBM WebSphere systems that predate REST, or when the client’s middleware layer mandates SOAP.


Q22. What is the Salesforce Bulk API? When would you use Bulk API 2.0 vs. REST API?

Answer:

The Bulk API is designed for loading or querying large datasets asynchronously — it processes records in batches in the background without consuming synchronous API limits.

Bulk API 2.0 (recommended):

  • Simplified — just upload a CSV, get results
  • No batch management needed (Salesforce handles it)
  • Supports: insert, update, upsert, delete, hardDelete, query
  • Best for: 10,000+ records in a single operation

When to use Bulk API 2.0 vs REST:

| Scenario | Use |
| --- | --- |
| Loading 500k records from legacy system | Bulk API 2.0 |
| Real-time single record update from UI | REST API |
| Nightly ETL sync of 50k orders | Bulk API 2.0 |
| Mobile app creating a lead | REST API |
| Data migration project | Bulk API 2.0 |

Deloitte tip: Always use Bulk API for data migration projects. REST API for real-time integration flows.


Q23. What is a Named Credential and why should it be used over hardcoded endpoint URLs?

Answer:

A Named Credential is a Salesforce configuration object that stores an external endpoint URL and authentication details (OAuth, Basic Auth, JWT, etc.) securely — separate from code.

Why use Named Credentials:

  1. Security: Authentication secrets are never stored in Apex code or Custom Settings — they’re encrypted and managed by Salesforce.
  2. Portability: When deploying from sandbox to production, you don’t need to update hardcoded URLs — the Named Credential is environment-specific.
  3. No Remote Site Settings needed: Named Credentials automatically bypass Remote Site Settings for their defined endpoint.
  4. Merged credentials: Callouts using Named Credentials automatically include auth headers — no manual Authorization header building.
// Without Named Credential (BAD)
HttpRequest req = new HttpRequest();
req.setEndpoint('https://api.externalservice.com/v1/data');
req.setHeader('Authorization', 'Bearer ' + someHardcodedToken); // ❌

// With Named Credential (GOOD)
req.setEndpoint('callout:ExternalServiceNC/v1/data'); // ✅

Q24. How do you handle Salesforce outbound callouts and what are the key considerations?

Answer:

Making a callout from Apex:

public class ExternalServiceClient {
    public static String fetchData(String endpoint) {
        Http http = new Http();
        HttpRequest request = new HttpRequest();
        request.setEndpoint('callout:MyNamedCredential' + endpoint);
        request.setMethod('GET');
        request.setTimeout(10000); // 10 second timeout
        
        HttpResponse response = http.send(request);
        
        if (response.getStatusCode() == 200) {
            return response.getBody();
        } else {
            throw new CalloutException('Error: ' + response.getStatus());
        }
    }
}

Key considerations:

  1. Cannot callout after DML in same transaction — Use @future(callout=true) or Queueable with Database.AllowsCallouts
  2. Timeout max: 120 seconds
  3. Max callouts per transaction: 100
  4. Mock callouts in tests: Implement HttpCalloutMock interface — you can’t make real callouts in test context
  5. Error handling: Always handle non-2xx status codes, network timeouts, and parse errors explicitly
  6. Idempotency: For POST/PATCH, implement retry logic with idempotency keys if the external system supports it

Q25. What is MuleSoft and how does it fit in a Deloitte Salesforce integration architecture?

Answer:

MuleSoft (owned by Salesforce) is an Integration Platform as a Service (iPaaS) — an API-led connectivity platform that acts as an integration middleware layer between Salesforce and other enterprise systems.

Typical Deloitte integration architecture:

SAP ERP ──┐
Oracle DB ─┤──► MuleSoft Anypoint Platform ──► Salesforce
Legacy CRM ┘         (Transformation, Routing,
                      Error Handling, Retry)

Why MuleSoft over direct Salesforce-to-system integration:

  • Decoupling: Systems connect to MuleSoft APIs, not directly to each other — changes in one system don’t break others
  • Transformation: MuleSoft handles data format transformation (XML to JSON, field mapping) outside Salesforce
  • Reusability: The same MuleSoft API can serve multiple consumers (Salesforce, mobile, portals)
  • Monitoring: Centralized integration monitoring, alerting, and retry logic
  • Security: Single authentication/authorization gateway

Deloitte preference: On enterprise clients, MuleSoft is the standard integration backbone. Point-to-point integrations (Salesforce directly calling SAP) are avoided for their brittleness.


Q26. What is Salesforce Connect and External Objects? When would you use them?

Answer:

Salesforce Connect allows Salesforce to display data from external systems in real time as if it were native Salesforce data — without copying the data into Salesforce.

External Objects are Salesforce objects backed by external data sources (OData 2.0/4.0, custom adapters). They behave like custom objects in the UI — you can build page layouts, relate them to standard objects, and create reports — but the data lives in an external system.

When to use Salesforce Connect:

  • External data is large (millions of records) and copying it would hit storage limits
  • Data must always be current — polling/sync introduces staleness
  • Data is primarily read-only in the Salesforce context (writable external objects are supported, but with adapter-specific limitations)
  • You need to join external data with Salesforce data in relationships (Indirect Lookup, External Lookup)

Deloitte use cases:

  • Showing order history from SAP without migrating all historical orders
  • Displaying real-time inventory levels from a warehouse management system
  • Surfacing customer support tickets from an on-premise ticketing system

Q27. How would you design an error handling and retry strategy for a Salesforce-to-external integration?

Answer:

Layered error handling strategy:

1. HTTP error classification:

  • 4xx (client errors): Don’t retry — the request itself is bad. Log and alert.
  • 429 (rate limited): Retry with exponential backoff.
  • 5xx (server errors): Retry up to 3 times with backoff; escalate if all fail.
  • Timeout: Retry once; if it times out again, route the payload to the dead-letter queue (DLQ).
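The retry-with-backoff schedule above can be sketched as a small helper (the 1 s base and 30 s cap are illustrative assumptions, not values from the text):

```javascript
// Exponential backoff: the delay doubles with each attempt and is capped
// so a long outage doesn't push retries out indefinitely.
function backoffDelayMs(attempt, baseMs = 1000, capMs = 30000) {
    return Math.min(baseMs * 2 ** attempt, capMs);
}
// attempt 0 → 1000 ms, 1 → 2000 ms, 3 → 8000 ms, 5+ → capped at 30000 ms
```

In Apex the same schedule is usually approximated with Scheduled Apex or the minute-granularity delay overload of System.enqueueJob, since a Queueable cannot sleep.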

2. Implementation pattern (Queueable with retry):

public class IntegrationRetryQueueable implements Queueable, Database.AllowsCallouts {
    private Integer retryCount;
    private String payload;

    public IntegrationRetryQueueable(String payload, Integer retryCount) {
        this.payload = payload;
        this.retryCount = retryCount;
    }

    public void execute(QueueableContext ctx) {
        try {
            // make callout with payload
        } catch (CalloutException e) {
            if (retryCount < 3) {
                // Re-enqueue with an incremented retry count
                System.enqueueJob(new IntegrationRetryQueueable(payload, retryCount + 1));
            } else {
                // Max retries exhausted: log to Integration_Error__c, send alert
            }
        }
    }
}

3. Dead Letter Queue (DLQ):

  • After max retries, persist failed payloads to Integration_Error__c custom object
  • Include: payload, error message, timestamp, retry count, record ID
  • Build an admin UI for manual retry or triage

4. Idempotency:

  • Tag outbound requests with a unique Correlation-Id header
  • Ensures retried calls don’t create duplicate records in the external system
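The de-duplication a Correlation-Id enables can be sketched from the receiver's side (a minimal in-memory sketch; a real endpoint would persist seen IDs durably):

```javascript
// Receiver-side idempotency sketch: a given Correlation-Id is processed
// at most once, so retried deliveries of the same request are harmless.
const seenCorrelationIds = new Set();

function processOnce(correlationId, handler) {
    if (seenCorrelationIds.has(correlationId)) {
        return false;               // duplicate retry: acknowledge, skip the work
    }
    seenCorrelationIds.add(correlationId);
    handler();                      // perform the side effect exactly once
    return true;
}
```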

Q28. What is the difference between Inbound and Outbound Salesforce integrations? Give examples of each pattern.

Answer:

Inbound (External → Salesforce): External systems push data INTO Salesforce.

  • Patterns: REST API POST, SOAP API, Bulk API uploads, Platform Event publish from external
  • Examples: E-commerce website creating Salesforce Leads; ERP syncing invoices as custom records; marketing platform pushing campaign responses

Outbound (Salesforce → External): Salesforce pushes data OUT to external systems.

  • Patterns: Outbound Messages (SOAP-based, config-driven), Apex callouts, Platform Event subscribe from external
  • Examples: Creating a ticket in ServiceNow when a Case is escalated; updating inventory in SAP when an Order is fulfilled; sending a customer record to a data warehouse after update

Event-driven hybrid:

  • Salesforce Change Data Capture → External system subscribes via Streaming API
  • External system publishes Platform Event → Salesforce trigger/flow processes it

Deloitte design principle: Always prefer event-driven patterns for decoupled, resilient integrations over point-to-point synchronous callouts when latency allows.


SECTION 4: Data Modeling & SOQL/SOSL


Q29. What is the difference between a Lookup relationship and a Master-Detail relationship in Salesforce?

Answer:

| Feature | Lookup | Master-Detail |
| --- | --- | --- |
| Required field | No | Yes (child always needs a parent) |
| Cascade delete | No | Yes (parent deleted = children deleted) |
| Sharing inheritance | No | Yes (child inherits parent’s sharing) |
| Roll-up summary fields | No | Yes (COUNT, SUM, MIN, MAX on master) |
| OWD impact | No | Child’s OWD controlled by master |
| Can reparent | Yes | Controlled by field setting |
| Max relationships per object | 40 | 2 |

When to use each:

  • Master-Detail: When child records have no meaning without the parent (e.g., Order Line Items → Order) and you need roll-up summaries.
  • Lookup: When the relationship is optional or the child can exist independently (e.g., Contact → Account — contacts can exist without accounts).

Deloitte tip: Avoid making relationships Master-Detail unless you actually need roll-up summaries or cascade delete. The mandatory parent constraint can cause issues during data loads and integrations.


Q30. How do you write an efficient SOQL query for large datasets? What is selective querying?

Answer:

Selective queries use indexed fields in the WHERE clause so Salesforce can use database indexes rather than doing a full table scan. A filter is selective when it targets fewer than a threshold of records: roughly 10% of the first million records (capped at 333,333 records) for a custom index, and 30% of the first million (capped at one million records) for a standard index.

Standard indexed fields: Id, Name, OwnerId, RecordTypeId, CreatedDate, SystemModstamp, and lookup/master-detail fields. Custom fields marked as External ID or Unique receive custom indexes.

Best practices:

// ❌ Non-selective - full table scan on 5M records
SELECT Id, Name FROM Account WHERE Description LIKE '%Deloitte%'

// ✅ Selective - uses indexed field
SELECT Id, Name FROM Account WHERE CreatedDate >= :startDate AND OwnerId = :currentUser

// ✅ Parent-to-child (sub-query) - only 1 SOQL query
SELECT Id, Name, (SELECT Id, Subject FROM Cases ORDER BY CreatedDate DESC LIMIT 5)
FROM Account WHERE Id IN :accountIds

// Use LIMIT and ORDER BY for large result sets
SELECT Id, Name FROM Opportunity 
WHERE StageName = 'Closed Won' 
ORDER BY CloseDate DESC 
LIMIT 200 OFFSET 0

SOQL best practices at scale:

  • Never query without a WHERE clause on large objects
  • Use FOR UPDATE on records you’ll DML within the same transaction (avoids race conditions)
  • Use FOR VIEW / FOR REFERENCE to update LastViewedDate / LastReferencedDate without a full DML

Q31. What is a polymorphic relationship in Salesforce? How do you query it?

Answer:

A polymorphic relationship is when a lookup field can reference multiple different object types. The classic example is the WhoId and WhatId fields on Activities (Tasks/Events).

  • WhoId can point to Contact OR Lead
  • WhatId can point to Account, Opportunity, Case, or any activity-enabled object

Querying polymorphic fields requires TYPEOF in SOQL:

SELECT Id, Subject,
    TYPEOF Who
        WHEN Contact THEN FirstName, LastName, Email
        WHEN Lead THEN FirstName, LastName, Company
    END
FROM Task
WHERE ActivityDate = TODAY

Without TYPEOF, you’d query the common fields (Name, Id) and then check the Type field:

SELECT Id, Who.Type, Who.Name FROM Task

Deloitte context: Custom polymorphic lookups can’t be created — polymorphism is limited to standard fields such as WhoId, WhatId, and OwnerId — so understanding Activity polymorphism is critical when building custom activity timelines or integration mappings.


Q32. Explain the WITH SECURITY_ENFORCED clause and stripInaccessible(). When would you use each?

Answer:

Both enforce field-level security (FLS) and object-level security (CRUD) in SOQL/DML, but in different ways.

WITH SECURITY_ENFORCED (SOQL clause):

  • Added directly in SOQL query
  • Throws a QueryException if the running user lacks read access to any field or object referenced in the query
  • Fail-fast: query aborts rather than returning partial data
List<Account> accs = [SELECT Id, Name, AnnualRevenue FROM Account WITH SECURITY_ENFORCED];

stripInaccessible() (Apex method):

  • Strips inaccessible fields from query results or before DML
  • Does NOT throw an exception — silently removes fields the user can’t see
  • More graceful for building flexible UIs where partial data is acceptable
SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.READABLE, accountList);
List<Account> accessible = decision.getRecords();

When to use which:

  • Use WITH SECURITY_ENFORCED in internal tools/reports where you want strict, fail-fast access enforcement
  • Use stripInaccessible() in community/portal or consumer-facing code where you want graceful degradation
  • Note that recent API versions also offer WITH USER_MODE, which Salesforce now recommends over WITH SECURITY_ENFORCED because it enforces FLS, CRUD, and sharing consistently

Q33. What is the HAVING clause in SOQL? Give a practical example.

Answer:

HAVING filters aggregate results in SOQL — it’s the aggregate equivalent of WHERE. You use it when you want to filter groups based on aggregate function values (COUNT, SUM, MIN, MAX, AVG).

// Find Accounts with more than 5 open Opportunities
SELECT AccountId, COUNT(Id) oppCount
FROM Opportunity
WHERE StageName != 'Closed Won' AND StageName != 'Closed Lost'
GROUP BY AccountId
HAVING COUNT(Id) > 5
ORDER BY COUNT(Id) DESC

// Find sales reps with total closed revenue > $1M this year
SELECT OwnerId, SUM(Amount) totalRevenue
FROM Opportunity
WHERE CloseDate = THIS_YEAR AND IsWon = true
GROUP BY OwnerId
HAVING SUM(Amount) > 1000000

Deloitte interview tip: Knowing HAVING vs WHERE is a frequent mid-level distinction. WHERE filters individual rows before grouping; HAVING filters groups after aggregation.


Q34. What are External IDs and how are they used in upsert operations?

Answer:

An External ID is a custom field on a Salesforce object marked as “External ID” — it acts as an alternate unique key that references a record’s ID in an external system.

Use in upsert: Database.upsert() uses the External ID field to determine whether to insert (record doesn’t exist) or update (record with that External ID already exists):

Account acc = new Account();
acc.ERP_ID__c = 'SAP-12345'; // External ID field
acc.Name = 'Deloitte Client Corp';
acc.AnnualRevenue = 5000000;

Database.upsert(acc, Account.ERP_ID__c); // Upsert by External ID

Benefits:

  • Idempotent data loads — re-running the same load won’t create duplicates
  • No need to maintain a mapping table between Salesforce IDs and external IDs
  • Critical for data migrations and ongoing sync integrations

In SOQL/relationships: External IDs also enable Relationship Mapping in Bulk API — you can relate records using external IDs instead of Salesforce IDs, which simplifies data loading.


SECTION 5: Security, Sharing & Governance


Q35. Explain the Salesforce security model: OWD, Role Hierarchy, Sharing Rules, Manual Sharing, and Apex Managed Sharing.

Answer:

Salesforce access control works in layers — each layer can only open access, never restrict beyond OWD:

  1. Object-Level Security (CRUD): Can the user access this object type at all? Controlled by Profiles/Permission Sets.
  2. Field-Level Security (FLS): Can the user see/edit this specific field? Controlled by Profiles/Permission Sets.
  3. Record-Level Security (Sharing):
    • OWD (Org-Wide Defaults): The most restrictive baseline. Private, Public Read Only, or Public Read/Write.
    • Role Hierarchy: Users higher in the hierarchy can see records owned by those below them (unless “Grant Access Using Hierarchies” is disabled, which is possible only for custom objects).
    • Sharing Rules: Automatically grant additional access to groups of users based on record ownership or criteria.
    • Manual Sharing: Record owners or admins manually share individual records with specific users/groups.
    • Apex Managed Sharing: Programmatically create sharing rules via Share objects (e.g., AccountShare) for complex dynamic sharing logic.

Apex Managed Sharing example:

AccountShare share = new AccountShare();
share.AccountId = accountId;
share.UserOrGroupId = userId;
share.AccountAccessLevel = 'Edit';
share.RowCause = Schema.AccountShare.RowCause.Manual;
insert share;

Q36. What is the difference between a Profile and a Permission Set? When would you use Permission Set Groups?

Answer:

Profile: A collection of settings and permissions that acts as a baseline for a user. Every user must have exactly one profile. Profiles control: object access, field access, tab visibility, page layout assignment, login hours/IP ranges, and app access.

Permission Set: A supplemental collection of permissions that can be added ON TOP of a user’s profile. Users can have multiple permission sets. Used to grant additional access without changing the profile.

Permission Set Group (PSG): A bundle of Permission Sets that can be assigned together. Simplifies assignment — instead of assigning 5 permission sets to every Sales Manager, create a PSG called “Sales Manager Access” and assign one PSG.

Best practice (Deloitte standard):

  • Use the Minimum Access – Salesforce profile as the baseline for all users
  • Grant ALL meaningful permissions via Permission Sets / Permission Set Groups
  • This makes permissions auditable, reusable, and portable across orgs
  • Avoids the proliferation of custom profiles that are hard to maintain
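Because all meaningful permissions live in Permission Sets under this model, assignment is often automated during user provisioning. A minimal Apex sketch (the permission set name and the userId variable are illustrative):

```apex
// Assign a permission set programmatically, e.g. during user provisioning.
// 'Sales_Manager_Access' is a hypothetical permission set API name.
PermissionSet ps = [SELECT Id FROM PermissionSet
                    WHERE Name = 'Sales_Manager_Access' LIMIT 1];
insert new PermissionSetAssignment(
    AssigneeId      = userId,   // Id of the user being provisioned
    PermissionSetId = ps.Id
);
```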

Q37. What is a Permission Set License (PSL) and how does it differ from a User License?

Answer:

User License (e.g., Salesforce, Sales Cloud, Service Cloud): Determines what features and functionality a user can access at the platform level. Assigned one per user.

Permission Set License (PSL): A supplemental license that unlocks specific features that aren’t included in the user’s base license. Multiple PSLs can be assigned to a user.

Examples:

  • A user with a standard Salesforce license gets an Einstein Analytics PSL to access Tableau CRM
  • A user gets an Identity Connect PSL to enable SAML federation features
  • A CRM Analytics Plus PSL enables advanced analytics features

Why it matters at Deloitte: License management is a significant part of Salesforce governance on enterprise engagements. Assigning features without the correct PSL causes activation failures. License audits are performed during health checks.


Q38. What is Shield Platform Encryption and when would a Deloitte client need it?

Answer:

Shield Platform Encryption encrypts data at rest in Salesforce using keys derived from a customer-controlled tenant secret, with an optional Bring Your Own Key (BYOK) model. It goes beyond Salesforce’s default encryption to meet specific compliance requirements.

Standard Salesforce encryption: Encrypts data at the infrastructure level (Salesforce manages keys).

Shield Platform Encryption adds:

  • Encryption of specific standard and custom fields in the database
  • Customer holds and rotates their own encryption keys
  • Helps meet compliance requirements such as PCI DSS, HIPAA, GDPR, and FedRAMP
  • Encrypts: field data, files, attachments, Chatter data, search indexes

When a Deloitte client needs it:

  • Financial services clients storing PAN or SSN data (PCI DSS)
  • Healthcare clients with PHI (HIPAA)
  • Government/defense clients (FedRAMP)
  • Any client with data sovereignty requirements where they need to revoke Salesforce’s ability to read data by destroying keys

Trade-offs: Shield Platform Encryption impacts functionality — encrypted fields can’t be used in certain filters, formula fields, or workflow criteria. Deloitte architects must assess the impact before enabling.


Q39. How do you prevent SOQL injection in Apex?

Answer:

SOQL injection occurs when user-supplied input is concatenated directly into a SOQL query string, allowing malicious users to manipulate the query.

Vulnerable code:

// ❌ SOQL INJECTION RISK
String name = ApexPages.currentPage().getParameters().get('name');
String query = 'SELECT Id FROM Account WHERE Name = \'' + name + '\'';
List<Account> accounts = Database.query(query);
// If name = "' OR Name != '", the resulting query matches ALL accounts!

Safe approaches:

  1. Static SOQL with bind variables (preferred):
// ✅ Bind variable - input is never interpreted as SOQL
String name = ApexPages.currentPage().getParameters().get('name');
List<Account> accounts = [SELECT Id FROM Account WHERE Name = :name];
  2. String.escapeSingleQuotes() for dynamic SOQL:
// ✅ Escape the input when dynamic SOQL is unavoidable
String safeName = String.escapeSingleQuotes(name);
String query = 'SELECT Id FROM Account WHERE Name = \'' + safeName + '\'';
  3. Input validation: Whitelist allowed characters and reject inputs containing SOQL metacharacters in high-security contexts.

Deloitte note: SOQL injection is on the Salesforce Security Review checklist. Always use bind variables for any user-provided input.


SECTION 6: Automation & Flow


Q40. What is the recommended Salesforce automation strategy in 2026? How does it differ from 5 years ago?

Answer:

The 2026 recommended hierarchy (Salesforce official guidance):

  1. Flow (Screen Flows, Record-Triggered Flows, Scheduled Flows) — The primary automation tool. Handles most business logic without code.
  2. Apex Triggers — For complex business logic that Flow can’t handle (advanced queries, callouts, complex object manipulation).
  3. Invocable Apex — Called from Flow when you need to mix declarative and programmatic logic.

What’s deprecated/sunset:

  • Workflow Rules: Retired in 2023. Migrate to Flow.
  • Process Builder: Deprecated; new processes can no longer be created. Migrate to Flow.
  • Approval Processes: Still supported, no change.

5 years ago: Process Builder was the primary automation tool alongside Workflow Rules. Apex was reached for even moderately complex logic. Today, Flow’s capabilities (looping, subflows, invocable actions, platform events) handle the vast majority of use cases.

Deloitte interview tip: Be ready to explain how you’d migrate a Process Builder process to Flow and the equivalents (PB “Immediate Action” → Flow “Run Immediately,” PB criteria → Flow entry conditions).


Q41. What are the different types of Flows? Explain when you’d use each.

Answer:

Flow Type                     | Trigger                     | User Interaction | Use Case
Screen Flow                   | User clicks                 | Yes | Guided wizards, data entry, multi-step processes
Record-Triggered Flow         | Record create/update/delete | No  | Replacing Workflow Rules, updating fields, sending emails on record change
Scheduled Flow                | Date/time schedule          | No  | Batch processing, sending reminders, nightly cleanup
Platform Event-Triggered Flow | Platform Event received     | No  | Integration processing, event-driven automation
Autolaunched Flow             | Called from Apex/Processes  | No  | Reusable logic invoked programmatically

Deloitte common patterns:

  • Screen Flow in Community/Experience Cloud: Self-service portals where users complete multi-step forms
  • Record-Triggered After-Save Flow: Replacing Process Builder automations (create related records, send emails)
  • Scheduled Flow: Sending 7-day renewal reminders by querying records due for renewal

Q42. What is the difference between a “before-save” and “after-save” Record-Triggered Flow? When would you use each?

Answer:

Before-Save (Fast Field Update):

  • Runs BEFORE the record is written to the database
  • Can update fields on the triggering record itself without an additional DML
  • Much faster — no extra save transaction
  • Can look up other records (Get Records) but cannot create/update/delete them
  • Cannot send emails or make callouts

After-Save:

  • Runs AFTER the record is committed to the database
  • Can create/update/delete related records
  • Can send emails, make callouts (via Apex actions)
  • Must update the triggering record via a separate DML (additional save)

Decision guide:

  • Updating a field on the same record? → Before-Save (e.g., auto-populating a Full Name field from First + Last)
  • Creating a child record? → After-Save (e.g., create a Task when an Opportunity reaches Proposal stage)
  • Both? → Use Before-Save for the same-record field updates + After-Save for related record creation

Q43. How do you call Apex from a Flow? What are @InvocableMethod and @InvocableVariable?

Answer:

@InvocableMethod exposes an Apex method to Flow (and Process Builder) as an Action. The method must be static, take at most one parameter (a List), return void or a List, and be the only @InvocableMethod in its class.

@InvocableVariable marks properties within an invocable input/output class as variables that Flow can pass in or receive.

public class CreditCheckAction {
    
    @InvocableMethod(label='Run Credit Check' description='Calls external credit bureau API')
    public static List<Output> runCreditCheck(List<Input> inputs) {
        List<Output> results = new List<Output>();
        for (Input inp : inputs) {
            Output out = new Output();
            out.creditScore = ExternalCreditService.check(inp.ssn);
            out.approved = out.creditScore > 650;
            results.add(out);
        }
        return results;
    }
    
    public class Input {
        @InvocableVariable(required=true)
        public String ssn;
    }
    
    public class Output {
        @InvocableVariable
        public Integer creditScore;
        @InvocableVariable
        public Boolean approved;
    }
}

Deloitte best practice: @InvocableMethod is the bridge between Flow (declarative) and Apex (programmatic). Use it to keep business logic in Apex for complex processing while allowing admins to wire it into Flows without developer intervention.


SECTION 7: DevOps, Testing & Deployment


Q44. What is Salesforce DX (SFDX) and how does it change the development workflow?

Answer:

Salesforce DX (SFDX) is Salesforce’s developer experience toolkit that introduces:

  1. Scratch Orgs: Disposable, source-driven dev/test environments that can be spun up in minutes from a project definition file and destroyed when done.
  2. Source-Based Development: Project metadata lives in source control (Git) as the source of truth — not the org.
  3. Salesforce CLI (sf/sfdx): Command-line tools for org operations, metadata retrieval, deployment, and testing.
  4. Unlocked Packages: Modular packaging of metadata with version tracking and dependency management.

Traditional org-based workflow vs. SFDX:

Aspect          | Traditional        | SFDX
Source of truth | Sandbox org        | Git repository
Dev environment | Shared sandbox     | Individual scratch orgs
Deployment      | Change sets        | CI/CD pipelines (GitHub Actions, Jenkins)
Packaging       | Unmanaged packages | Unlocked packages

Deloitte standard: All major Salesforce engagements use SFDX with Git (typically GitHub or Azure DevOps) and CI/CD pipelines for automated deployment and test execution.


Q45. What is a good Apex test strategy? What does Deloitte expect in terms of test coverage and quality?

Answer:

Salesforce minimum: 75% code coverage to deploy to production. But at Deloitte, the bar is much higher.

Deloitte test quality standards:

  1. Coverage target: 90%+ coverage, with meaningful assertions — not just “coverage for coverage’s sake.”
  2. Test classes should:
    • Use @TestSetup for shared test data
    • Test both positive (happy path) and negative (error cases, validation failures) scenarios
    • Test bulk behavior — always test with 200 records, not just 1
    • Use Test.startTest() / Test.stopTest() to reset governor limits and force async execution
    • Assert specific outcomes — not just “no exception was thrown”
  3. Mock external services:
    • Use HttpCalloutMock for HTTP callouts
    • Use StubProvider or mock frameworks for service layer isolation
  4. Avoid:
    • SeeAllData=true (flaky, environment-dependent)
    • Testing private methods (test through public interface)
    • Relying on real data in the org
@isTest
static void testBulkAccountUpdate() {
    List<Account> accounts = new List<Account>();
    for (Integer i = 0; i < 200; i++) {
        accounts.add(new Account(Name = 'Test Account ' + i, Industry = 'Technology'));
    }
    insert accounts;
    
    Test.startTest();
    AccountService.updateIndustry(accounts, 'Finance');
    Test.stopTest();
    
    List<Account> updated = [SELECT Industry FROM Account WHERE Id IN :accounts];
    for (Account a : updated) {
        System.assertEquals('Finance', a.Industry, 'Industry should be updated to Finance');
    }
}
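The HttpCalloutMock mentioned under “Mock external services” can be sketched as follows (the mock class name and the canned JSON body are illustrative, not a fixed API contract):

```apex
// Minimal HttpCalloutMock: returns a canned response so tests never hit the network.
@isTest
global class CreditServiceMock implements HttpCalloutMock {
    global HttpResponse respond(HttpRequest req) {
        HttpResponse res = new HttpResponse();
        res.setHeader('Content-Type', 'application/json');
        res.setBody('{"score": 720}');   // canned payload the test will assert against
        res.setStatusCode(200);
        return res;
    }
}
```

In the test method, register it before Test.startTest() with Test.setMock(HttpCalloutMock.class, new CreditServiceMock()); any callout in the transaction then receives this response.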

Q46. What is a CI/CD pipeline for Salesforce? Describe a typical Deloitte pipeline.

Answer:

A CI/CD (Continuous Integration/Continuous Deployment) pipeline automates the process of validating, testing, and deploying Salesforce metadata from source control to environments.

Typical Deloitte pipeline (GitHub Actions or Azure DevOps):

Developer pushes feature branch
         ↓
[CI - Pull Request Stage]
  1. Lint Apex with PMD (static code analysis)
  2. Validate metadata against a validation org (--checkonly)
  3. Run all Apex test classes
  4. Code review required
         ↓
[Merge to develop]
  5. Auto-deploy to SIT (System Integration Testing) sandbox
         ↓
[Merge to release branch]
  6. Auto-deploy to UAT sandbox
  7. Run regression test suite
         ↓
[Manual approval]
  8. Deploy to Production (with rollback plan)

Key tools:

  • Salesforce CLI (sf): sf project deploy start, sf apex run test
  • PMD: Apex static analysis (detects SOQL injection, missing WITH SECURITY_ENFORCED, etc.)
  • GitHub Actions / Azure Pipelines: Pipeline orchestration
  • Copado / Gearset / Flosum: Salesforce-native DevOps platforms commonly used by Deloitte for complex multi-org pipelines
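The pull-request stage of such a pipeline can be sketched in GitHub Actions YAML. Everything here is illustrative rather than a Deloitte standard: the workflow name, the SFDX_AUTH_URL repository secret, and the force-app source directory are all assumptions.

```yaml
# Sketch of the PR validation stage: install the CLI, authenticate to a
# validation org, and run a check-only deployment with local tests.
name: pr-validation
on: pull_request
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install --global @salesforce/cli
      - run: echo "${{ secrets.SFDX_AUTH_URL }}" | sf org login sfdx-url --sfdx-url-stdin --alias ci-org
      - run: sf project deploy validate --source-dir force-app --test-level RunLocalTests --target-org ci-org
```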

Q47. How do you handle deployment of destructive changes in Salesforce?

Answer:

Destructive changes are metadata components being deleted from an org (removing a custom field, deleting a Flow, removing an Apex class).

Using Salesforce CLI:

# Create destructiveChanges.xml
# Deploy with destructive manifest
sf project deploy start \
  --manifest package.xml \
  --post-destructive-changes destructiveChanges.xml \
  --target-org production

destructiveChangesPre.xml vs destructiveChangesPost.xml:

  • Pre: Deletions happen BEFORE the package is deployed (use for removing components the new code no longer depends on)
  • Post: Deletions happen AFTER the package is deployed (use for removing components the old code depended on)
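A minimal destructiveChanges.xml might look like this (the component names are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Deletes a custom field and an Apex class; member names are examples only -->
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Account.Legacy_Score__c</members>
        <name>CustomField</name>
    </types>
    <types>
        <members>LegacyScoreCalculator</members>
        <name>ApexClass</name>
    </types>
    <version>60.0</version>
</Package>
```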

Deloitte precautions:

  1. Never deploy destructive changes without a backup — export the component being deleted first
  2. Check data dependencies — deleting a custom field permanently deletes its data in all records
  3. Test in sandbox first — validate the destructive change doesn’t break other automations
  4. Schedule during low-traffic windows — destructive deployments can briefly lock the org
  5. Get client sign-off — on consulting engagements, client approval is mandatory before any data-destructive operation

SECTION 8: Scenario / Consulting-Mindset Questions


Q48. A client reports that their Salesforce org has slowed down significantly after a recent deployment. How do you diagnose and resolve this? (Scenario question)

Answer:

Step 1: Gather context

  • What was deployed? (Apex triggers, Flows, new automation?)
  • Which operations are slow? (Record save? Report? Page load?)
  • Is it all users or specific profiles/record types?

Step 2: Use Salesforce debugging tools

  • Debug Logs: Enable for the affected user/profile. Look for: high CPU time, excessive SOQL queries, nested loops.
  • Event Monitoring: Check ApexExecution and ApexTrigger events for transaction times above baseline.
  • Setup Audit Trail: Confirm what was changed in the deployment.
  • Flow Interview Errors / Debug Logs: If Flows are involved, check for infinite loops or deeply nested subflows.

Step 3: Common culprits

  • Trigger without bulkification: A new trigger doing SOQL inside a loop that’s fine for 1 record but crawls for 50.
  • New Record-Triggered Flow with complex criteria running on every update.
  • After-save automation creating cascading updates on related records, triggering more automation.
  • Non-selective SOQL added to a trigger that runs a full table scan.

Step 4: Resolve

  • Fix the non-bulkified trigger / non-selective query
  • Add a Flow entry criteria to filter the Flow to only relevant records
  • Consolidate redundant automations

Deloitte approach: Always communicate findings to the client before making changes, provide a root cause analysis document, and propose a fix with estimated effort. Never just “fix it silently.”


Q49. A client wants to give external partners access to specific Salesforce data via an API. How would you design the solution? (Architecture question)

Answer:

Requirements clarification I’d ask:

  • What data? (Read-only or read-write?)
  • Authentication method? (OAuth 2.0 preferred? API keys?)
  • Volume? (Real-time queries or bulk exports?)
  • Partner type? (Trusted internal partner vs. third-party vendor?)

Recommended solution: Salesforce Connected App + REST API + Named Credentials

Architecture:

  1. Create a Connected App in Salesforce for each partner (or partner category)
  2. OAuth 2.0 Client Credentials Flow for server-to-server (no user login needed): Partner gets client_id + client_secret → exchanges for access token → calls Salesforce REST API
  3. Create a dedicated Integration User with a Permission Set that grants ONLY the required object/field access (principle of least privilege)
  4. Expose data via Custom REST Apex endpoint if you need to:
    • Aggregate data from multiple objects
    • Apply business logic before returning data
    • Control response format
@RestResource(urlMapping='/partner/accounts/*')
global with sharing class PartnerAccountAPI {
    @HttpGet
    global static List<Account> getAccounts() {
        // Only return fields the partner should see
        return [SELECT Id, Name, Industry, BillingCity 
                FROM Account 
                WHERE Partner_Accessible__c = true
                WITH SECURITY_ENFORCED];
    }
}
  5. Rate limiting: Consider Salesforce API request limits (for example, 100,000 calls per 24 hours is a common Enterprise Edition baseline; the exact entitlement varies by edition and licenses). For high-volume partners, use Experience Cloud + Headless API or add MuleSoft throttling.
  6. Audit: Enable Event Monitoring to track all API calls by partner.

Q50. You’ve joined a Deloitte project midway. The existing Salesforce org has no documentation, mixed automation (Workflows, Process Builder, Flows, and Triggers all on the same object), and poor test coverage. How would you approach stabilizing it? (Consulting-mindset question)

Answer:

This is a “technical debt remediation” scenario — common at Deloitte on legacy client orgs.

Phase 1: Assessment (Week 1-2)

  1. Inventory all automation: Use tools like Salesforce Optimizer, Elements.cloud, or FieldTrip to document all Triggers, Flows, Process Builders, Workflow Rules, and Validation Rules per object.
  2. Map the execution order: Document which automation fires in what order on key objects (Account, Opportunity, Case).
  3. Identify conflicts: Look for automation that updates the same field, causing loops or unexpected overwrites.
  4. Assess test coverage: Run sf apex run test --code-coverage or check the Apex Test Execution page in Setup. Flag all classes below 80%.
  5. Interview business stakeholders: Understand WHAT each automation is supposed to do (often the only documentation).

Phase 2: Stabilize (Weeks 3-6)

  1. Consolidate triggers: If multiple triggers exist on one object, merge into one using the Trigger Handler pattern.
  2. Disable deprecated automation: Turn off Workflow Rules / Process Builder processes, replace with equivalent Flows (document the migration).
  3. Fix test coverage: Start with the lowest-coverage classes that are most at risk. Write meaningful tests, not just coverage padding.
  4. Add a Trigger Bypass mechanism: Custom Metadata-based bypass so automation can be disabled during data loads without deployment.
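The Custom Metadata bypass in step 4 can be sketched as follows, assuming a hypothetical custom metadata type Trigger_Setting__mdt with a checkbox field Is_Active__c and one record per object:

```apex
// Custom Metadata-driven trigger bypass. An admin can untick Is_Active__c
// for an object before a data load -- no deployment, no code change.
public class TriggerBypass {
    public static Boolean isActive(String objectName) {
        Trigger_Setting__mdt setting = Trigger_Setting__mdt.getInstance(objectName);
        // Default to active if no record exists for this object
        return setting == null || setting.Is_Active__c;
    }
}
```

Each trigger handler then starts with a guard such as: if (!TriggerBypass.isActive('Account')) return;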

Phase 3: Document & Govern

  1. Create an Automation Inventory living document (Confluence or SharePoint)
  2. Establish a change governance process — no new automation without peer review
  3. Set up a CI/CD pipeline so future changes go through code review and automated testing
  4. Present findings and roadmap to client in a Technical Debt Report with prioritized remediation

Key Deloitte principle: Never just “fix” things silently. Always present findings, get approval, and document. Clients pay for transparency and a clear plan, not just execution.


🎯 Quick Tips for Your Deloitte Interview

Behavioral expectations:

  • Deloitte interviewers value the STAR method (Situation, Task, Action, Result) for scenario questions
  • Connect technical answers to client impact — cost savings, risk reduction, velocity improvement
  • Demonstrate awareness of governance, scalability, and maintainability — not just “does it work”

Technical red flags to avoid:

  • Saying “I always use triggers” for everything — show you consider Flow-first
  • Ignoring governor limits in theoretical solutions
  • Not mentioning security (FLS, CRUD, sharing) in data access scenarios

Top topics to review additionally:

  • Salesforce CPQ (if applying to product/revenue cloud teams)
  • Experience Cloud (Community portals)
  • Tableau CRM / Analytics Studio
  • Health Cloud / Financial Services Cloud (practice-specific)

Last updated: 2026 | Targeted for: 3–5 years Salesforce Developer experience

Best of Luck


Trusted by 2000+ learners to crack interviews at TCS, Infosys, Wipro, EY, and more.

Want more Real Salesforce Interview Q&As?

For All Job Seekers – 500+ Questions from Top Tech Companies → https://trailheadtitanshub.com/500-real-interview-questions-answers-from-top-tech-companies-ey-infosys-tcs-dell-salesforce-more/


Visit us On www.trailheadtitanshub.com

TrailheadTitans

At TrailheadTitans.com, we are dedicated to paving the way for both freshers and experienced professionals in the dynamic world of Salesforce. Founded by Abhishek Kumar Singh, a seasoned professional with a rich background in various IT companies, our platform aims to be the go-to destination for job seekers seeking the latest opportunities and valuable resources.
