
Conversation

@ZhengPeterWang
Collaborator

Description

Integrates only the language model part of the vision-language model.

Testing Done

Checklist:

  • My PR title strictly follows the format: [Your Priority] Your Title
  • I have attached the testing log above
  • I have provided enough comments in my code
  • I have updated the documentation
  • I have added tests for my changes

@PinetreePantry
Collaborator

Q: Do we only do the intervention on the language model part?

@aryamanarora
Collaborator

I think it's okay to start with the LM, like we have implemented for BLIP
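
For reference, a minimal sketch of what restricting interventions to the language-model part of a vision-language model can look like, using a plain PyTorch forward hook rather than the actual pyvene integration merged here. It assumes a HuggingFace LLaVA checkpoint that exposes the language model as `model.language_model` (a LlamaForCausalLM); the checkpoint name, layer index, and zero-out intervention are illustrative only.

```python
import torch
from transformers import LlavaForConditionalGeneration

# Assumption: a HuggingFace LLaVA checkpoint; any LLaVA-style model with a
# .language_model submodule would work the same way.
model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16
)

def zero_intervention(module, args, output):
    # Toy intervention: replace this decoder layer's hidden states with zeros.
    hidden = output[0] if isinstance(output, tuple) else output
    zeroed = torch.zeros_like(hidden)
    return (zeroed,) + output[1:] if isinstance(output, tuple) else zeroed

# The vision tower and the multimodal projector are left untouched; only a
# decoder layer inside the language model is hooked. The exact attribute path
# (.language_model.model.layers) depends on the transformers version.
layer = model.language_model.model.layers[10]
handle = layer.register_forward_hook(zero_intervention)
# ... run model(**inputs) as usual, then handle.remove() to undo the hook.
```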

@ZhengPeterWang merged commit 9b3e296 into main on May 1, 2024
@wcx881212

May I ask how to determine the position of the last text token if I want to use a linear probe on LLaVA, given the presence of system prompts and visual tokens?
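
A minimal sketch of one common way to do this, assuming a HuggingFace LLaVA checkpoint (the `llava-hf/llava-1.5-7b-hf` name and the image path below are placeholders): because the chat template ends with text (e.g. "ASSISTANT:"), the final position of the model's hidden states is the last text token, regardless of how many system-prompt or visual tokens come before it.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumption: any HF LLaVA checkpoint laid out this way
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16)

image = Image.open("example.jpg")  # hypothetical image path
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# The system prompt and the expanded visual tokens all sit *before* the final
# text tokens, and the prompt ends with text ("ASSISTANT:"), so for a single
# unpadded example the last position of the hidden states is the last text token.
last_hidden = outputs.hidden_states[-1]             # (1, seq_len, hidden_size)
last_text_pos = last_hidden.shape[1] - 1
probe_features = last_hidden[:, last_text_pos, :]   # feed this to the linear probe
```

Reading the position off `outputs.hidden_states` (rather than `input_ids`) sidesteps the question of whether the `<image>` placeholder is expanded by the processor or inside the model, which varies across transformers versions; for padded batches the last non-padding position per example would be used instead.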
